Every organisation I’ve seen try to get cloud costs under control starts by creating a FinOps team or a Cloud Centre of Excellence. The reasoning is intuitive: costs are rising, we need specialists, let’s centralise the expertise and the accountability. And it works — for a while. The team runs analyses, identifies waste, creates savings plans, maybe sets up a tagging policy. Costs come down, leadership is happy, the team gets credit.

Then, six months later, costs are rising again. The FinOps team is overwhelmed. Engineering teams are annoyed by requests they see as overhead. And the fundamental problem — the one that created the cost growth in the first place — hasn’t changed at all.

The centralisation trap

A centralised FinOps team can tell you what’s expensive. It can’t fix it, because the people who can fix it are the engineers who wrote the code. The decisions that drive cloud spend happen during architecture reviews, sprint planning, and code reviews — not in a finance dashboard. When cost accountability lives in a separate team, the people making the actual spending decisions have no stake in the outcome.

This is not a criticism of FinOps as a discipline. FinOps done well is genuinely valuable. The problem is organisational: you can’t govern your way to cost efficiency if governance is the only mechanism you’re using.

The opposite failure is equally real. Some organisations skip the central team entirely and tell every engineering team to own their own costs. This sounds good in theory — skin in the game, decentralised accountability, engineering teams with real ownership. In practice, most engineering teams don’t have the financial analysis skills or the tooling to do this well, and cost becomes just another priority competing with feature work and reliability. Without shared infrastructure — common tagging schemas, shared dashboards, central anomaly detection — distributed ownership fragments into distributed ignorance.

What actually changes behaviour

The pattern that works, in my experience, is a thin central layer that does two things: sets standards and surfaces information. Setting standards means a consistent tagging taxonomy so you can attribute costs to teams, services, and environments. Surfacing information means anomaly alerts that route to the team that owns the resource, not to a central inbox, and shared tooling that makes cost data available without requiring a finance degree to interpret it.
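The routing piece is mechanically simple once tags exist: look up the owning team from the resource's tags and deliver the alert there, falling back to a central triage queue only when the tag is missing. A minimal sketch — the tag keys (`team`, `service`, `env`) and the fallback queue name are illustrative, not a real schema:

```python
# Sketch: route a cost anomaly to the owning team via its `team` tag.
# Tag keys and destination names are illustrative assumptions.

FALLBACK_QUEUE = "finops-triage"  # central inbox, used only when tags are missing

def route_anomaly(resource_tags: dict, anomaly: dict) -> dict:
    """Return the alert destination and payload for one anomaly."""
    owner = resource_tags.get("team", FALLBACK_QUEUE)
    service = resource_tags.get("service", "unknown")
    return {
        "destination": owner,
        "service": service,
        "environment": resource_tags.get("env", "unknown"),
        "message": (
            f"Spend on {service} rose {anomaly['pct_change']:.0f}% "
            f"day-over-day (${anomaly['daily_cost']:.2f}/day)."
        ),
    }

alert = route_anomaly(
    {"team": "payments", "service": "checkout-api", "env": "prod"},
    {"pct_change": 45.0, "daily_cost": 312.50},
)
print(alert["destination"])  # payments
```

The fallback matters: every alert that lands in the central queue instead of a team channel is a measurable tagging gap, which gives the central layer a concrete backlog rather than a vague mandate.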

The actual accountability, though, has to live in the engineering teams. And it doesn’t get there through policy or mandates — it gets there when cost becomes visible in the places where engineers already work. Not in a monthly report that gets skimmed and forgotten, but in pull request checks that flag unexpectedly expensive infrastructure changes, in deployment pipelines that show projected cost impact, in team dashboards that show spend alongside reliability and velocity.
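A pull request cost check can be as simple as comparing projected monthly spend before and after an infrastructure change and failing when the delta crosses a threshold. A hedged sketch, assuming an upstream estimator (a tool like Infracost, for instance) has already produced the two numbers; the threshold and message wording are illustrative:

```python
# Sketch: fail a PR check when the projected monthly cost delta is too large.
# Assumes an upstream estimator produced `baseline` and `projected`;
# the threshold is an illustrative per-team setting.

MONTHLY_DELTA_THRESHOLD = 500.0  # dollars per month

def check_cost_delta(baseline: float, projected: float) -> tuple[bool, str]:
    """Return (passed, message) for a projected monthly cost change."""
    delta = projected - baseline
    if delta > MONTHLY_DELTA_THRESHOLD:
        return False, (
            f"Projected monthly cost rises by ${delta:,.2f} "
            f"(${baseline:,.2f} -> ${projected:,.2f}); threshold is "
            f"${MONTHLY_DELTA_THRESHOLD:,.2f}. Justify or scale down."
        )
    return True, f"Projected monthly cost delta ${delta:,.2f} is within budget."

ok, msg = check_cost_delta(baseline=2_000.0, projected=2_750.0)
print(ok)  # False
```

The point is not the arithmetic but the placement: the check runs where the engineer already is, at review time, before the spend exists.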

AWS Budgets alerts are a good example of this in practice. A budget alert that fires to a finance team is noise. The same alert routed to the engineering team that owns the service — with enough context to understand what caused it — is actionable. The difference isn’t the technology; it’s who receives the information and whether they can do something with it.
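In boto3 terms, the routing decision is just which subscriber the notification points at. A sketch of the request shape for the `budgets` client’s `create_budget` call — the account ID, SNS topic ARN, and limit are placeholders, and this only builds the payload rather than calling AWS:

```python
# Sketch: an AWS Budgets request that routes the alert to the owning team's
# SNS topic rather than a central finance inbox. Account ID, topic ARN, and
# limit are placeholders; this builds the request but does not call AWS.

def team_budget_request(team: str, topic_arn: str, monthly_limit: str) -> dict:
    """Build a create_budget request (boto3 `budgets` client) for one team."""
    return {
        "AccountId": "123456789012",  # placeholder
        "Budget": {
            "BudgetName": f"{team}-monthly",
            "BudgetLimit": {"Amount": monthly_limit, "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        "NotificationsWithSubscribers": [
            {
                "Notification": {
                    "NotificationType": "ACTUAL",
                    "ComparisonOperator": "GREATER_THAN",
                    "Threshold": 80.0,  # percent of the limit
                },
                # The routing decision: the team's topic, not a central inbox.
                "Subscribers": [
                    {"SubscriptionType": "SNS", "Address": topic_arn}
                ],
            }
        ],
    }

req = team_budget_request(
    "payments",
    "arn:aws:sns:us-east-1:123456789012:payments-cost-alerts",
    "10000",
)
# e.g. boto3.client("budgets").create_budget(**req)
```

From the team’s SNS topic, the alert can fan out to whatever channel the team already watches, which is the whole point.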

Tagging as culture, not compliance

Tagging deserves its own mention because it’s the thing most organisations get wrong. Tagging policies get published, teams are told to tag resources, nobody does it consistently, and six months later forty percent of your spend is attributed to “unknown” or “miscellaneous.”

Tags fail when they’re treated as a compliance requirement. They work when teams understand why they matter — because without them, nobody can tell you which service is eating most of your compute budget, and nobody can route cost anomalies to the right owner. I’ve had more success getting engineering teams to tag consistently when I frame it as “this is how you’ll know if something is going wrong with your service’s cost” than when I frame it as “this is required by the governance policy.”
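That failure is easy to measure, and measuring it is often the first useful thing a central layer can do. A small sketch that rolls spend up by `team` tag and reports the unattributed share — the cost records and the tag key are illustrative:

```python
# Sketch: measure what fraction of spend has no team attribution.
# The cost records and the `team` tag key are illustrative assumptions.
from collections import defaultdict

def attribute_spend(records: list[dict]) -> dict[str, float]:
    """Sum cost per team tag; untagged spend lands in 'unknown'."""
    totals: dict[str, float] = defaultdict(float)
    for rec in records:
        totals[rec.get("tags", {}).get("team", "unknown")] += rec["cost"]
    return dict(totals)

records = [
    {"cost": 420.0, "tags": {"team": "payments"}},
    {"cost": 180.0, "tags": {"team": "search"}},
    {"cost": 400.0, "tags": {}},  # untagged: nobody owns this spend
]
totals = attribute_spend(records)
unknown_share = totals.get("unknown", 0.0) / sum(totals.values())
print(f"{unknown_share:.0%} of spend is unattributed")  # 40% of spend is unattributed
```

Published per team, a number like this does more for tagging discipline than a policy document, because it shows each team exactly how much of the bill nobody can defend.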

The cultural piece

The change I’ve seen make the biggest difference is senior engineering leaders treating cost anomalies the same way they treat latency regressions. That shift doesn’t happen through announcements — it happens when cost gets discussed in engineering reviews the same way reliability does, when cost regressions get treated with the same seriousness as performance regressions, when the team that does a smart cost optimisation gets the same recognition as the team that ships a big feature.

In organisations running significant cloud infrastructure, cost is real and it compounds. Treating it as someone else’s problem doesn’t make the bills smaller — it just means you’re surprised by them.

The goal isn’t a FinOps team that watches your spend. It’s engineering teams that own it.

Further Reading

  1. Cloud Cost Governance That Actually Sticks
  2. Why Your Cloud Bill Is a Leadership Problem
  3. What FinOps Gets Wrong About Engineering Teams
  4. Your Cloud Bill Is an Engineering Problem. Start Treating It Like One.
  5. The Future of Cloud Cost Management Isn’t a Tool