Somewhere in the early 2010s, “microservices” became the answer to a question most engineering organisations hadn’t finished asking yet. The benefits were real — independent deployability, team autonomy, the ability to scale specific components without scaling everything — and the pattern spread rapidly, often outpacing the organisational and architectural maturity needed to do it well.

What many organisations ended up building, without necessarily realising it, is what I’d call a distributed monolith: a collection of services that are deployed independently but are so tightly coupled in their runtime dependencies that they behave like a monolith in every way that matters. When service A breaks, services B, C, and D break. When you want to change the data schema, you have to coordinate a deployment across six teams. When you want to understand the blast radius of a configuration change, you have to trace dependencies through a graph that nobody has fully documented.

You have all the operational complexity of microservices and none of the benefits. It’s arguably worse than the original monolith.

How to tell if you have one

The tell-tale signs are fairly consistent across the organisations I’ve seen fall into this pattern.

Synchronous runtime coupling is the biggest one. Services that communicate synchronously and can’t degrade gracefully when a dependency is slow or unavailable are tightly coupled by definition, regardless of whether they’re deployed as separate containers. If bringing down one service cascades into a system-wide incident, you don’t have independent services — you have a monolith that’s harder to observe.
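The standard remedy for this kind of coupling is to make failure a handled case rather than a cascade — timeouts, fallbacks, and circuit breakers. As a minimal sketch (not any particular library's API; `max_failures`, `reset_after`, and the fallback shape are all illustrative assumptions), a circuit breaker that serves a degraded response instead of hammering a dead dependency might look like this:

```python
import time


class CircuitBreaker:
    """Minimal circuit-breaker sketch: after max_failures consecutive
    failures, calls short-circuit to a fallback for reset_after seconds
    instead of hitting the failing dependency again."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # time the breaker tripped, if tripped

    def call(self, fn, fallback):
        # While the breaker is open, serve the degraded fallback
        # rather than letting the failure cascade upstream.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()
            # Half-open: the reset window elapsed, try the dependency again.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            return fallback()
        self.failures = 0  # any success resets the count
        return result
```

The point of the pattern isn't the bookkeeping; it's that service A has an explicit answer to "what do I return when B is down" — cached data, a default, a partial page — so one slow dependency stays one slow dependency.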

Shared databases are another common sign. When multiple services read from and write to the same database, the database becomes the real system boundary. Services may be independently deployable at the application layer, but any schema change requires cross-team coordination and careful sequencing. The boundary you drew in your service graph doesn’t match the boundary that actually matters.
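This kind of coupling is cheap to make visible. Assuming you can extract (or hand-maintain) a map of which services read or write which tables — the service and table names below are purely illustrative — a few lines are enough to list the tables where the database, not the service, is the real boundary:

```python
def shared_tables(service_tables):
    """Given a mapping of service -> set of tables it reads or writes,
    return the tables touched by more than one service, with the
    services involved. Each entry is a schema change that requires
    cross-team coordination."""
    touched_by = {}
    for service, tables in service_tables.items():
        for table in tables:
            touched_by.setdefault(table, set()).add(service)
    return {t: sorted(s) for t, s in touched_by.items() if len(s) > 1}
```

For example:

```python
deps = {
    "orders":  {"orders", "customers"},
    "billing": {"invoices", "customers"},
    "search":  {"orders"},
}
shared_tables(deps)
# {"customers": ["billing", "orders"], "orders": ["orders", "search"]}
```

A non-empty result doesn't automatically mean the decomposition is wrong, but every entry is a place where "independently deployable" is only true until someone needs to change a column.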

The coordination overhead test is simpler: if shipping a single customer-facing feature requires coordinating releases across three or more services owned by different teams, that’s a signal that your service decomposition doesn’t match your domain decomposition. You’ve created organisational friction without creating independence.

When a monolith is the right call

There’s an assumption in most of these conversations that goes unexamined: microservices are the destination and a monolith is a problem to be solved. That’s not right.

A well-structured monolith is a completely defensible architectural choice for many organisations, many products, and many stages of a system’s life. It’s operationally simpler, easier to reason about, easier to refactor when your domain understanding changes (which it always does, especially early), and requires less infrastructure investment to run and monitor.

The argument for microservices is strongest when you have: genuine need for independent scalability across different parts of the system, multiple teams that need to move independently without coordinating every deployment, and service boundaries that map cleanly to domain boundaries you’re confident in. If those three things aren’t true, the operational cost of microservices may not be worth the benefits.

Many startups and early-stage products would be better served by a modular monolith — clean internal boundaries, good separation of concerns, designed to be split later if needed — than by a premature decomposition that creates distributed system problems before the team is equipped to handle them.
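What "clean internal boundaries" means in practice is that modules depend on each other through explicit interfaces rather than reaching into each other's internals. A minimal sketch, with hypothetical `accounts`/`billing` modules invented for illustration:

```python
from abc import ABC, abstractmethod


class AccountDirectory(ABC):
    """The interface billing is allowed to see. This seam is where the
    module could later be split out behind a network call, if ever needed."""

    @abstractmethod
    def email_for(self, account_id: str) -> str: ...


class AccountsModule(AccountDirectory):
    def __init__(self):
        self._emails = {}  # in-memory store, purely for the sketch

    def register(self, account_id: str, email: str) -> None:
        self._emails[account_id] = email

    def email_for(self, account_id: str) -> str:
        return self._emails[account_id]


class BillingModule:
    # Depends on the AccountDirectory interface, not on AccountsModule's
    # internals — the dependency direction a future split would preserve.
    def __init__(self, directory: AccountDirectory):
        self._directory = directory

    def invoice_recipient(self, account_id: str) -> str:
        return self._directory.email_for(account_id)
```

The discipline is organisational as much as technical: the interface is the contract, and crossing it any other way is treated as a bug. If the split ever happens, the interface becomes the API and the call sites don't change shape.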

Having this conversation across a proud org

This is the politically delicate part. Engineering teams that have spent two years building a microservices architecture have an identity investment in that architecture. Telling them they've built a distributed monolith, if done without care, is received as an attack on their judgment and their work. It shouldn't be — distributed monoliths are an extremely common outcome and not a sign of incompetence — but that's how it often lands.

What I’ve coached my managers to do is start from the present problem rather than the past decision. Not “your architecture has these flaws” but “we keep having this type of incident / this type of coordination delay — what’s driving it?” When the teams do the diagnosis themselves, they arrive at the same conclusions, and the path forward is theirs rather than something imposed from above.

What you’re ultimately trying to reach, at the org level, is a shared understanding that the service boundaries you have aren’t necessarily the service boundaries you need — and that adjusting them is normal engineering work, not an admission of failure. The best architectures are the ones that got revised as understanding deepened, not the ones that got it right the first time.
