I have a lot of respect for the FinOps discipline. The people who do it well understand cloud pricing models deeply, can find waste that engineering teams miss, and provide genuine business value. But I’ve also watched FinOps programs fail repeatedly in ways that are predictable once you understand the underlying mistake: they treat cloud cost as a financial problem when it’s actually an engineering behavior problem — and those require very different interventions.
The dashboard that nobody opens
Here’s what typically happens when an organisation gets serious about cloud costs. A FinOps team is formed, or a centralised Cloud CoE gets tasked with cost governance. They build excellent dashboards — detailed, accurate breakdowns of spend by service, by account, by region. They produce monthly reports showing trends, anomalies, and optimisation opportunities. They present these to engineering leaders in quarterly reviews.
The engineering leaders nod. The dashboards don’t get opened. Costs continue trending the way they were trending.
The problem isn’t the dashboards. The dashboards are fine. The problem is the assumption that showing engineers a cost number will change the decisions that created that cost number. It won’t, because the decisions that drive cloud spend aren’t made in quarterly reviews. They’re made when a developer is sizing an EC2 instance, choosing a data transfer architecture, or setting a retention policy on a log group. By the time a cost shows up in a monthly report, the code that created it was written weeks or months ago.
The latency of feedback
Engineers respond to feedback that’s close in time to the decisions they make. A failing test in a pull request changes behaviour. A monthly finance report that shows the cost of decisions made sixty days ago does not.
This isn’t a criticism of engineers — it’s how humans work. FinOps programs that run on reporting cycles are working against that dynamic. What changes behaviour is making cost visible in the places where engineers already work: in the deployment pipeline, in the infrastructure-as-code review, in the development environment. A pull request check that flags “this change is estimated to add $400/month to your compute bill, compared to current baseline” is a behaviour-change tool. A quarterly dashboard is not.
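To make the idea concrete, here is a minimal sketch of what such a pull-request cost gate might look like. The pricing step is stubbed out with hard-coded illustrative rates; in practice a tool such as Infracost produces the per-plan monthly estimates. All names, rates, and the $100 threshold below are assumptions, not a real implementation.

```python
# Sketch of a pull-request cost gate. The pricing step is the hard part;
# here it is stubbed with illustrative per-resource rates -- in practice
# a tool such as Infracost prices the plan. All names are hypothetical.

THRESHOLD_USD = 100.0  # monthly delta that fails the check (assumption)


def estimate_monthly_cost(plan: dict) -> float:
    """Stub: sum a hard-coded monthly rate for each resource in a toy plan."""
    rates = {"aws_instance.m5.large": 70.0, "aws_instance.m5.xlarge": 140.0}
    return sum(rates.get(r, 0.0) for r in plan["resources"])


def cost_gate(baseline_plan: dict, proposed_plan: dict) -> tuple[bool, float]:
    """Return (passed, monthly_delta_usd) for a PR check."""
    delta = estimate_monthly_cost(proposed_plan) - estimate_monthly_cost(baseline_plan)
    return delta <= THRESHOLD_USD, delta


if __name__ == "__main__":
    baseline = {"resources": ["aws_instance.m5.large"]}
    proposed = {"resources": ["aws_instance.m5.xlarge", "aws_instance.m5.xlarge"]}
    passed, delta = cost_gate(baseline, proposed)
    print(f"estimated delta: ${delta:.0f}/month -> {'pass' if passed else 'fail'}")
```

The point isn't the arithmetic; it's that the result surfaces in the pull request, at the moment the sizing decision is still cheap to change.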
What actually shifts behaviour
The most effective cost culture changes I’ve seen have a few things in common. They make individual teams directly accountable for specific spend numbers they can see and influence — not an abstract total shared across the organisation, but a per-service or per-team view that connects to decisions that team actually controls. They put cost alerts in the hands of the team that owns the resource, not in a central inbox. And they treat unexpected cost growth the same way they treat an unexpected latency spike: as a signal worth investigating, not as an accounting issue to sort out at end of quarter.
AWS Budgets and Cost Anomaly Detection are examples of public tooling that can support this. Neither of these is magic — the technical setup is straightforward. The harder work is organisational: routing alerts to engineering teams, making sure those teams understand what the alert means, and building a response process so that “cost anomaly detected” doesn’t sit unacknowledged for two weeks.
The ask for engineering leaders
If you lead engineering teams with meaningful cloud spend, the most useful thing you can do probably isn't to push for a better FinOps dashboard. It's to start talking about cost in engineering reviews with the same regularity you talk about performance and reliability. Not dominating the conversation — cost should be one lens among many — but treating it as a legitimate engineering concern rather than something finance manages.
When your engineers see that cloud cost is something their engineering leader cares about, they start caring about it. Not because of mandates, but because culture is shaped by what leaders pay attention to. That’s a slow process, but it’s the only one that sticks.
Monthly reports can tell you what happened, but they can’t change what your engineers build next — that requires a different kind of visibility, and a different kind of leadership attention.