System Design

Monoliths vs Microservices: When Each Is Right

Around 2014 microservices became the default architectural recommendation. Every conference talk, every engineering blog, every architecture interview presumed that the modern way to build a backend was to split it into dozens of small services. Teams that did not were treated as behind the curve.

The pendulum has swung back. In 2023, Amazon Prime Video published a post describing how they moved a piece of their pipeline from microservices back to a monolith and cut costs by 90%. Shopify has been vocal for years about their "modular monolith" approach. Basecamp, Stack Overflow, and GitHub have always been monoliths and show no signs of changing.

The current consensus, if there is one, is that microservices are a powerful tool for specific problems, not a default architecture. This article is an attempt to describe when each genuinely wins.

Terms, briefly

A monolith is a single deployable unit that contains the entire application. Changes ship as one release. All components run in the same process and share the same database.

A microservices architecture decomposes the application into multiple independently deployable services, each responsible for a bounded domain, each owning its own data store, communicating over the network (HTTP, gRPC, message queues).

A modular monolith is a monolith that enforces internal module boundaries through compile-time checks, package visibility, or API layers — so the code is well-structured, but it all deploys together. Most healthy monoliths are modular.

```mermaid
flowchart LR
    subgraph Monolith
        direction TB
        M[App Process]
        M --- Auth[Auth module]
        M --- Orders[Orders module]
        M --- Billing[Billing module]
        M --- Mail[Email module]
        M --- MDB[(Shared DB)]
    end
    subgraph Microservices
        direction TB
        GW[API Gateway]
        GW --> AuthS[Auth Service] --> AuthDB[(Auth DB)]
        GW --> OrdersS[Orders Service] --> OrdersDB[(Orders DB)]
        GW --> BillingS[Billing Service] --> BillingDB[(Billing DB)]
        OrdersS -.->|events| MailS[Email Service]
    end
```

Left: one deployable, one database, function calls between modules. Right: multiple deployables, each with its own data store, communicating over the network.

The case for a monolith

Operational simplicity

One deployable unit means one CI/CD pipeline, one set of credentials, one set of metrics dashboards, one runbook. You do not need service discovery, a service mesh, distributed tracing tooling, or a twelve-person platform team to keep it running. For small teams, this is enormous.

Local reasoning

In a monolith, calling another module is a function call. You get IDE navigation, compile-time type checking, stack traces that cover the full request path, and refactors that span the whole application at once. In microservices, the equivalent is a network call: async, timeout-prone, serialised, and observable only through correlation IDs and trace dashboards.

Transactions

If your business logic needs to atomically update related data, a monolith gives you database transactions for free. In microservices, the equivalent requires either a saga, a two-phase commit across services (rarely a good idea), or eventual consistency with compensating actions. Distributed transactions are a class of problem that does not exist in a monolith.
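To make the contrast concrete, here is a minimal sketch of the monolith side of that tradeoff: one local database transaction that either updates stock and creates the order together, or does neither. The schema and function names are illustrative, not from any particular codebase; in a microservices split, the inventory and order writes would live in different services and need a saga instead.

```python
import sqlite3

def place_order(conn: sqlite3.Connection, item: str, qty: int) -> None:
    # "with conn" wraps the block in one transaction: COMMIT on success,
    # ROLLBACK if anything raises. Both writes succeed or neither does.
    with conn:
        cur = conn.execute(
            "UPDATE inventory SET stock = stock - ? WHERE item = ? AND stock >= ?",
            (qty, item, qty),
        )
        if cur.rowcount == 0:
            raise ValueError("insufficient stock")
        conn.execute("INSERT INTO orders (item, qty) VALUES (?, ?)", (item, qty))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (item TEXT PRIMARY KEY, stock INTEGER)")
conn.execute("CREATE TABLE orders (item TEXT, qty INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('widget', 5)")

place_order(conn, "widget", 3)
try:
    place_order(conn, "widget", 10)  # fails: rolled back, nothing is written
except ValueError:
    pass
```

The failed second order leaves no partial state behind — that guarantee is exactly what has to be rebuilt by hand once the two writes live in different services.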

Performance

A function call inside a process takes nanoseconds. A local HTTP call over the loopback interface takes hundreds of microseconds. A call to another pod in a Kubernetes cluster takes milliseconds. For chatty internal communication, the monolith is orders of magnitude faster before you have tuned anything.

Cost

A monolith can run on one machine. Microservices typically run on many. You pay for the fixed infrastructure cost of each service (minimum instance count, ingress, monitoring) even if most of them are idle. Amazon Prime Video's widely cited case study boils down to this: the serialisation and inter-service traffic cost more than the business value of the decomposition.

The case for microservices

Team autonomy

The strongest argument for microservices is organisational, not technical. If you have five teams working on the same codebase, they will step on each other's toes during deployments, reviews, and test suites. Splitting the codebase along team boundaries gives each team its own release cadence, its own on-call, its own technology choices. Conway's Law made literal.

Below a certain team size, this argument does not apply. If you have one team of eight, microservices introduce coordination costs without resolving any organisational friction.

Genuinely different scaling profiles

Some components of a system have wildly different resource profiles. The image-processing service needs GPUs; the login service needs a small amount of RAM; the analytics ingester needs enormous throughput on cheap machines. Giving each its own deployable lets you scale them independently and on appropriate hardware. This is a real microservices win that a monolith cannot replicate.

Different technology stacks

If one component needs Python for machine learning, another needs Go for concurrency, and another needs Rust for low-latency work, microservices let each team pick. Monoliths, by contrast, mostly force one language choice across the whole application.

That said, most teams do not actually need polyglot backends. Many microservices codebases end up in the same language anyway, and the "flexibility" argument stays theoretical.

Fault isolation

A bug in one microservice does not crash the others. A bug in a monolith can (depending on language and threading model) take down the whole application. This is a real benefit, but it is often overstated — a well-designed monolith with thread isolation and graceful degradation achieves a lot of the same isolation.

The hidden costs of microservices

Observability is expensive

In a monolith, a request produces a single log line and a single stack trace. In microservices, a single user request may touch a dozen services. Understanding what happened requires correlation IDs propagated through every service, distributed tracing infrastructure (Jaeger, Zipkin, OpenTelemetry), and log aggregation. Building this well takes months; building it badly makes production incidents harder to debug than they were in the monolith.
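The correlation-ID plumbing mentioned above can be sketched in a few lines. This is an illustrative pattern, not any specific framework's API: the ID is read from an incoming header (or minted at the edge), stashed in a context variable, and stamped onto every log line; outgoing calls would forward the same header.

```python
import contextvars
import logging
import uuid

# Holds the current request's correlation ID for the duration of handling.
correlation_id = contextvars.ContextVar("correlation_id", default="-")

class CorrelationFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # Attach the ID so the formatter can include it in every line.
        record.correlation_id = correlation_id.get()
        return True

logger = logging.getLogger("orders")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("[%(correlation_id)s] %(message)s"))
handler.addFilter(CorrelationFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def handle_request(headers: dict) -> str:
    # Reuse the caller's ID if present, otherwise this service starts the trace.
    cid = headers.get("X-Correlation-ID", uuid.uuid4().hex)
    correlation_id.set(cid)
    logger.info("order received")  # logged as: [<cid>] order received
    # Any outgoing calls to other services would forward the same header.
    return cid

handle_request({"X-Correlation-ID": "abc123"})
```

Every service in the request path has to do this consistently for the logs to join up — which is why the overhead is measured in months, not days.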

Network failures become application failures

Every inter-service call is a new failure mode. The service could be down. The network could be slow. The response could be malformed. Every call site needs timeouts, retries, circuit breakers, and fallbacks. These patterns exist for good reasons; they are also work your engineers have to do and get right.
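As a flavour of the work involved, here is a minimal circuit breaker sketch. The thresholds and class shape are illustrative (real deployments usually reach for a library): after a run of consecutive failures the breaker opens and fails fast instead of hammering a dead dependency, then allows a trial call once the reset window passes.

```python
import time

class CircuitBreaker:
    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures  # consecutive failures before opening
        self.reset_after = reset_after    # seconds to stay open
        self.failures = 0
        self.opened_at: float | None = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

In practice this wraps every outbound HTTP or gRPC call site, alongside a timeout and a retry policy — three mechanisms per call that a function call in a monolith simply does not need.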

Data consistency is hard

When each service owns its own database, "user signs up and receives welcome email" is no longer a transaction. If the email service is down when the user signs up, you need a retry queue. If the user cancels during the retry window, you need a compensating action. Sagas, outbox patterns, and eventual consistency are all tools for this; all of them are harder than the one-line transaction in a monolith.
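The outbox pattern mentioned above can be sketched concretely. The idea: the signup row and a "send welcome email" event commit in one local transaction, so the event cannot be lost even if the email service is down; a separate relay process drains the outbox and publishes later. Table and event names here are illustrative.

```python
import json
import sqlite3

def sign_up(conn: sqlite3.Connection, email: str) -> None:
    # User row and outbox event commit together, or not at all.
    with conn:
        conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
        conn.execute(
            "INSERT INTO outbox (event) VALUES (?)",
            (json.dumps({"type": "welcome_email", "to": email}),),
        )

def drain_outbox(conn: sqlite3.Connection, publish) -> int:
    # The relay: publish each pending event, then mark it sent.
    # If publish fails, the row stays pending and is retried next run.
    sent = 0
    for row_id, event in conn.execute(
        "SELECT id, event FROM outbox WHERE sent = 0 ORDER BY id"
    ).fetchall():
        publish(json.loads(event))  # e.g. push to a message queue
        with conn:
            conn.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (row_id,))
        sent += 1
    return sent

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT)")
conn.execute(
    "CREATE TABLE outbox (id INTEGER PRIMARY KEY, event TEXT, sent INTEGER DEFAULT 0)"
)
sign_up(conn, "ada@example.com")
published = []
drain_outbox(conn, published.append)
```

Note what this buys and what it costs: delivery becomes at-least-once (the consumer must tolerate duplicates), and the welcome email arrives eventually rather than transactionally — the one-line monolith transaction has become two functions and a background process.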

Duplicated infrastructure

Each service needs its own deployment pipeline, monitoring, logging, secrets management, TLS certificates, and container image. At N services, you have roughly N times the infrastructure overhead. Platform teams exist to amortise this, but they are expensive.

Cognitive load

A new engineer joining the team needs to understand what services exist, what each one does, which team owns each, how they communicate, and where the shared concerns live (auth, rate limiting, observability). Onboarding to a well-organised monolith is a faster path to productivity for most people.

The modular monolith as a middle path

Many teams that started with microservices, regretted it, and moved back ended up with what is now called a modular monolith. The shape:

  • One deployable unit.
  • Internal module boundaries are strictly enforced (package visibility, interface contracts, build-time dependency checks).
  • Each module owns a slice of the database schema, with no cross-module foreign keys.
  • Cross-module calls go through defined interfaces, not direct database access.
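The build-time dependency checks in the list above can be as simple as a rule over the import graph. Here is a hypothetical sketch of such a check — the "every cross-module import must go through the other module's `api` submodule" rule is an assumption for illustration; real projects typically enforce this with a tool such as import-linter or ArchUnit.

```python
def check_imports(module_imports: dict[str, list[str]]) -> list[str]:
    """Return boundary violations for a modular monolith.

    module_imports maps a module name (e.g. 'orders') to the dotted
    modules it imports (e.g. ['billing.api', 'billing.models']).
    Rule: imports that cross a module boundary must target '<module>.api'.
    """
    violations = []
    for module, imports in module_imports.items():
        for imp in imports:
            top = imp.split(".")[0]
            if top != module and not imp.endswith(".api"):
                violations.append(f"{module} imports {imp}: use {top}.api")
    return violations

# Example: orders may use its own internals and billing's public API,
# but reaching into billing.models is flagged.
deps = {"orders": ["orders.models", "billing.api", "billing.models"]}
```

Run in CI, a check like this is what turns module boundaries from a convention into a constraint — the property that later makes extraction into a real service feasible.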

The benefits: you get most of the structural benefits of microservices (clear boundaries, focused ownership, independent reasoning) without the operational overhead. If one module later genuinely needs its own deployable (different scaling, different tech, different team), the boundaries are already there to extract it.

Shopify's approach is the most-cited real-world example. Their Ruby monolith runs a significant fraction of global e-commerce and has survived Black Friday scaling pressure that would break many microservices architectures.

Signs you picked wrong

Some diagnostic symptoms:

You chose microservices but should not have

  • Engineers spend significant time in coordination meetings because changes require updates across multiple services.
  • Deployments are still effectively coupled — you cannot ship service A without also shipping service B.
  • Most bugs take longer to diagnose than they would have in a single codebase.
  • Your infrastructure bill is surprisingly large relative to traffic.
  • New engineers struggle to understand how requests flow through the system.

You chose a monolith but should not have

  • Different teams' changes frequently conflict during code review or deployment.
  • A component genuinely needs different hardware (GPUs, high-memory, low-latency) and you cannot get it without over-provisioning everything.
  • Build and test times have grown so long they are bottlenecking the team.
  • A single incident regularly takes down functionality that is logically unrelated.

If none of these symptoms apply, your current architecture is probably fine.

Frequently Asked Questions

What team size should switch from monolith to microservices?

There is no fixed number, but below 15-20 engineers you almost certainly do not need microservices. Somewhere between 30 and 100 the organisational benefits start to outweigh the operational costs for many teams. Amazon's famous "two-pizza team" rule of thumb suggests each microservice should fit on one team of 6-10 people.

Can I go from monolith to microservices later?

Yes, and this is usually the right order. Start with a monolith. Identify module boundaries that hold up over time. Extract the ones that have genuine independent-scaling or independent-release needs. This is called the Strangler Fig pattern and is the most common path into microservices. The reverse path — microservices back to monolith — also happens but is more painful.
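Mechanically, the Strangler Fig pattern is often just a routing layer in front of the monolith: extracted paths go to the new service, everything else falls through to the monolith, and the route table grows one entry per extraction. A minimal sketch, with an entirely illustrative route table:

```python
# Paths already extracted from the monolith, and the service that owns them.
# Everything not listed still falls through to the monolith.
EXTRACTED_PREFIXES = {
    "/images": "image-service",
    "/reports": "reporting-service",
}

def route(path: str) -> str:
    """Decide which backend handles a request path."""
    for prefix, service in EXTRACTED_PREFIXES.items():
        if path == prefix or path.startswith(prefix + "/"):
            return service
    return "monolith"
```

In production this logic usually lives in the API gateway or load balancer config rather than application code, but the shape is the same: the monolith shrinks one route at a time, and each extraction can be rolled back by deleting an entry.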

Are microservices still a good resume skill?

Yes, because almost every large company has them, and knowing how to operate them well (observability, resilience, deployment) is a real skill. But the industry is more nuanced about them than it was in 2018. Saying "I built a monolith because the problem did not warrant microservices" is now a positive signal rather than a negative one.

What about serverless? Is that microservices?

Serverless functions are microservices taken to an extreme — each function is its own deployable. They share many of the benefits (fine-grained scaling) and many of the drawbacks (observability, coordination), plus a few unique ones (cold starts, vendor lock-in, local development friction). Useful for event-driven workloads; not a replacement for your main application architecture.

Share your thoughts

Worked with this in production and have a story to share, or disagree with a tradeoff? Email us at support@mybytenest.com — we read everything.