Microservices or monolith: how do you choose?
The most reliable signal is team size and structure: a small single team points to a monolith; multiple teams with independent release needs point to microservices. The second signal is a real difference in scalability requirements between components: if everything scales together, there's no need to decompose.
The most underrated solution is the Modular Monolith: the benefits of code separation without the operational cost of microservices. It works for the vast majority of systems that are not Amazon or Netflix.

This guide is part of the complete section on design patterns and software architecture in C# and .NET.
Almost every architectural discussion eventually arrives here: microservices or monolith? And it's almost always followed by the most useless answer in software architecture: "it depends." That's not an answer. It's an excuse for not taking a position.
This article takes a position and provides a concrete decision framework based on real signals, not on tech hype or "that's what Netflix and Amazon do." Because Netflix has 200 million users and thousands of engineers. Your project probably doesn't.
The starting point is an uncomfortable premise: the majority of projects that adopted microservices in the last five years didn't need them. Many are going back to the monolith or more moderate architectural variants. Amazon itself, which popularized microservices, published a technical post-mortem in 2023 where Prime Video engineers consolidated a microservices system into a monolith, reducing costs by 90%.
This is not an isolated case. It's a symptom of a systemic problem: software architecture is chosen based on marketing and conference talks, not based on the actual needs of the project. This guide helps you do exactly the opposite: start from your context, not from the abstract ideal.
If you are reading this article, you are probably facing a concrete decision: you are starting a new project, or you are evaluating whether the current system needs to change. In both cases, what matters is not the technology itself, but understanding which signals to look at and which questions to ask before choosing.
The microservices myth: how marketing distorted architectural reality
Microservices became popular around 2014-2015, driven by conference talks from Netflix, Amazon, and Spotify on how they made their systems scalable. The message that most of the industry received was simple: microservices are the future, the monolith is the past.
The problem is that this message completely lost the original context. Netflix didn't choose microservices because they are "better." They chose them because they had a specific problem: dozens of independent teams that needed to ship new features every day without blocking each other, on a platform with hundreds of millions of users and extreme availability requirements. The monolith, in that specific context, could no longer hold.
The vast majority of companies that develop software don't have that problem. They have 3 developers working on a business management system, or 10 people building a SaaS platform for a vertical market. Applying Netflix's architecture to these contexts is not "doing things right." It's pure over-engineering that increases complexity without any real benefit.
Martin Fowler, one of the theorists who helped formalize microservices, has written explicitly that the default recommendation should be not to use them. The starting point should be the monolith, from which services are extracted only when concrete signals justify it. This isn't said by someone who "doesn't understand" microservices. It's said by someone who understands them deeply.
Sam Newman, author of the reference book on microservices, has stated on multiple occasions that many teams adopt microservices without having the organizational and technical foundations to manage them correctly. The result is what he calls a "distributed monolith": all the problems of microservices with none of the benefits.
What a monolith is in 2026: precise definitions and different types
Before comparing the two architectures, it's essential to align on definitions. "Monolith" in 2026 doesn't necessarily mean "badly written spaghetti code." There are different types of monolith with very different characteristics.
The traditional monolith (single deployable unit)
The traditional monolith is a single deployable artifact that contains all the application logic. An ASP.NET Core project with all controllers, all services, all business logic in a single assembly running on a single process. The database is shared by the entire system.
This is the type of monolith that the dominant narrative attacks. But the problem is not the monolith itself: it's the poorly structured monolith where everything calls everything without clear boundaries. That type is often called "big ball of mud" and represents a problem of architectural discipline, not of technology choice.
The Modular Monolith
The Modular Monolith is still a single deployable artifact, but internally organized into modules with explicit boundaries. Each module has its own area of the database, its own public API to other modules, and controlled internal dependencies. Modules don't call each other freely: they communicate through defined contracts, just as microservices would, but without network overhead.
This is the type of monolith that should almost always be considered before deciding whether microservices are needed. It makes the codebase maintainable, responsibilities clear, and boundaries well-defined enough to allow eventual extraction of services in the future.
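As a concrete illustration, here is a minimal sketch of such a module boundary in C#. All names (`IStockAvailability`, `PlaceOrderHandler`, and so on) are hypothetical: the point is that the Orders module sees only the Inventory module's public contract, never its internal entities.

```csharp
using System;

namespace Inventory.Contracts
{
    // The only surface the Inventory module exposes to other modules.
    public interface IStockAvailability
    {
        bool IsInStock(string sku, int quantity);
    }
}

namespace Inventory
{
    using Inventory.Contracts;

    // Internal implementation; other modules never reference this type directly.
    public class InMemoryStock : IStockAvailability
    {
        public bool IsInStock(string sku, int quantity) => quantity <= 10; // demo rule
    }
}

namespace Orders
{
    using Inventory.Contracts;

    public class PlaceOrderHandler
    {
        private readonly IStockAvailability _stock;

        public PlaceOrderHandler(IStockAvailability stock) => _stock = stock;

        public string Handle(string sku, int quantity)
        {
            // An in-process call through a contract: no network hop,
            // but the boundary is as explicit as a service API.
            return _stock.IsInStock(sku, quantity) ? "Accepted" : "Rejected";
        }
    }
}
```

Wired up through dependency injection, `Orders` could later be pointed at an HTTP-backed implementation of the same contract if the module is ever extracted as a service.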
The distributed monolith (to avoid)
The distributed monolith is the situation that results from trying to do microservices without the right conditions. The system is physically distributed across multiple services, but those services are tightly coupled: they share the database, call each other synchronously in cascade, and must be deployed together.
The distributed monolith is not just "microservices done badly." It's a distinct architecture in its own right, and for most purposes a genuinely worse one than the traditional monolith.
What a microservice is: correct definition and domain boundaries
A microservice is an autonomous service responsible for a specific bounded context of the domain, which can be deployed, scaled, and developed independently from the other services in the system. The key word is not "micro" meaning "small": it's independent.
A service is a true microservice if it meets these three criteria:
- Independent deployment: it can be released to production without coordination with other teams or other services.
- Independent scalability: it can be scaled horizontally autonomously when it receives load spikes, without having to scale everything else.
- Independent development: a separate team can work on it without depending on other teams, with the possibility of using different technology stacks if needed.
The boundary of a microservice should correspond to a bounded context of the domain according to Domain-Driven Design principles. It's not about cutting the system by technical functions (database, API, presentation), but by domain concepts: Orders, Inventory, Payments, Notifications. Each bounded context has its own language, its own data, its own business rules.
A common mistake is creating microservices that are too small: services built around single database entities instead of complete bounded contexts. This produces what is called a "nanoservice architecture," where network calls between services become so frequent that their cost far outweighs any benefit of independence.
For a deeper look at how bounded contexts translate into concrete architecture, the article on software architectural patterns provides useful context on separation of responsibilities at the design level.
5 concrete signals that indicate when to switch to microservices
Microservices are not chosen for philosophical reasons or because "that's what modern tech companies do." They are chosen when there are concrete signals in your project that indicate a real problem that only microservices solve better than the monolith.
Signal 1: multiple teams constantly blocking each other
If you have 4 teams working on the same codebase and constantly getting in each other's way, with daily merge conflicts, deployments blocked because one team isn't ready, and continuous meetings to coordinate releases, you are paying the organizational cost of the monolith without the benefit of its simplicity. Conway's Law says that software architecture tends to mirror organizational structure. If your organization is distributed, your architecture should be too.
Signal 2: radically different scalability requirements between components
If the search component receives 500 requests per second while the admin area receives 2, and these differences are stable and predictable, extracting the search engine as a separate service makes economic and operational sense. But note: this signal is only valid when scalability requirements are genuinely and radically different, confirmed by production data, not hypothetical.
Signal 3: incompatible technologies needed for different components
If you need Python with TensorFlow for the machine learning component, .NET for the main backend, and maybe Go for a high-concurrency component, microservices are the natural answer. You can't natively integrate a Python module within a .NET monolith. Service boundaries allow technological freedom where it's genuinely needed.
Signal 4: regulatory or compliance isolation
In sectors like finance, healthcare, or telecommunications, some functionalities must be physically isolated for compliance reasons. The payment system must be PCI-DSS certified separately. Clinical data must be isolated to meet specific GDPR requirements. In these cases, microservices are not an optional architectural choice, but a concrete regulatory requirement.
Signal 5: differentiated resilience with failure isolation
If a component must continue to function even when other components of the system are down, microservices allow failure isolation. In a monolith, if the process crashes due to a bug in one module, the entire system goes offline. With microservices, a crashing service doesn't bring down the others.
5 signals that indicate the monolith is the right choice (and will remain so)
There are equally concrete signals that indicate the monolith not only works today, but will continue to work well in the foreseeable future of your project.
Signal 1: small or single team
With fewer than 10-15 developers working on the same system, microservices add complexity without organizational benefits. The coordination overhead between teams that justifies microservices doesn't exist. A single team with a well-structured monolith can move much faster than a team that has to manage separate pipelines, API versioning, and service discovery for every module.
Signal 2: evolving and unstabilized domain
If you are building a new product and the domain is not yet clear, dividing into microservices early is risky. Bounded contexts will change as domain understanding develops. Renegotiating boundaries between microservices in production is painful and expensive. The monolith allows refactoring boundaries at nearly zero cost as long as the code isn't distributed.
Signal 3: team without distributed operations experience
Managing a microservices system requires specific skills: Kubernetes or an equivalent orchestrator, service mesh like Istio or Linkerd, distributed tracing with Jaeger or Zipkin, partial failure management with circuit breakers, distributed configuration. If the team doesn't have this experience, the cost of acquiring it in the context of a production system is enormous.
Signal 4: limited operational budget
Microservices cost more to operate: more CI/CD pipelines, more staging environments, more computing resources for containers, more inter-service network costs. For a startup or mid-sized software company, these costs are not negligible.
Signal 5: system with strong transactionality requirements
If the domain requires frequent distributed transactions, as in ERP systems, accounting, or multi-step approval processes, the monolith greatly simplifies consistency management. ACID transactions within a single process are trivial. In a distributed system they require Saga pattern, compensating transactions, and eventual consistency management.
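To make the contrast concrete, here is a minimal, self-contained sketch of the saga machinery that a distributed design forces on you (the `SagaStep` type and the step names are illustrative, not a real library): each step carries a compensating action, and on failure the completed steps are undone in reverse order — work a single ACID transaction in a monolith gives you for free.

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: a step pairs an action with its compensating action.
public record SagaStep(Action Execute, Action Compensate);

public static class Saga
{
    public static bool Run(IList<SagaStep> steps)
    {
        var completed = new Stack<SagaStep>();
        foreach (var step in steps)
        {
            try
            {
                step.Execute();
                completed.Push(step);
            }
            catch
            {
                // Roll back every completed step, most recent first.
                while (completed.Count > 0) completed.Pop().Compensate();
                return false;
            }
        }
        return true;
    }
}
```

For a "reserve stock / charge payment / create shipment" flow, a failed payment triggers compensation of the stock reservation. Note that, unlike a database rollback, every compensating action is code you must write, test, and keep correct yourself.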
The Modular Monolith: the solution 70% of teams should choose
Between the traditional monolith and microservices there is a middle ground that solves most real problems with a fraction of the complexity: the Modular Monolith. It's the solution that 70% of teams should choose but often ignore, because it doesn't make headlines at conferences.
A Modular Monolith is a single deployable artifact internally divided into modules with explicit boundaries. Each module has:
- Its own area of the database (separate schema, not a separate database)
- Its own public API to other modules, typically defined as C# interfaces or internal events
- Its own internal dependencies, not shared with other modules
- Boundaries that prevent direct access to other modules' internal entities
How to implement a Modular Monolith in .NET
In a .NET project, the Modular Monolith is typically implemented with a structure of separate projects within the same solution. Each module is a project with its own domain classes, its own infrastructure, and a public contracts project that exposes only what other modules can use.
Communication between modules can happen in two ways: through direct calls to public interfaces (for synchronous operations), or through an in-process event bus (for asynchronous operations and to reduce coupling). Libraries like MediatR allow implementing an internal event bus with minimal overhead.
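The second option can be illustrated with a tiny hand-rolled in-process event bus — a sketch of the shape that libraries like MediatR provide out of the box (the `EventBus` type and the `OrderPlaced` event are hypothetical names, invented for illustration):

```csharp
using System;
using System.Collections.Generic;

public interface IEvent { }

// A deliberately minimal in-process publish/subscribe dispatcher.
public class EventBus
{
    private readonly Dictionary<Type, List<Action<IEvent>>> _handlers = new();

    public void Subscribe<T>(Action<T> handler) where T : IEvent
    {
        if (!_handlers.TryGetValue(typeof(T), out var list))
            _handlers[typeof(T)] = list = new List<Action<IEvent>>();
        list.Add(e => handler((T)e));
    }

    public void Publish(IEvent evt)
    {
        // In-process dispatch: no serialization, no network, no broker —
        // handlers in other modules run inside the same process.
        if (_handlers.TryGetValue(evt.GetType(), out var list))
            foreach (var handle in list) handle(evt);
    }
}

public record OrderPlaced(string OrderId) : IEvent;
```

A Notifications module can subscribe to `OrderPlaced` without the Orders module knowing it exists — the same decoupling microservices buy, minus the operational cost.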
The advantages of the Modular Monolith over microservices
The Modular Monolith retains all the operational benefits of the monolith: a single artifact to deploy, simple ACID transactions, linear debugging, no network overhead. At the same time, it provides separation of responsibilities, explicit boundaries between areas of the system, and the ability to extract modules into separate services in the future when conditions justify it.
The Modular Monolith is not a temporary solution. It's a fully legitimate architecture that can sustain complex systems for years, and has become the recommended default choice for many of the most authoritative software architects in the field.
The hidden costs of microservices that nobody calculates before starting
The cost of microservices is almost always underestimated. Conference slides show the benefits: scalability, team independence, resilience. They never show the real operational bill. Before choosing microservices, these costs need to be explicitly budgeted.
Infrastructure and orchestration
Microservices require a container orchestration system, typically Kubernetes. Kubernetes is a powerful but complex tool: it requires specific skills for setup, configuration, monitoring, and maintenance. For a team of 5, having at least one person managing the Kubernetes cluster is a fixed cost. On managed cloud (AKS, EKS, GKE), the economic cost of the cluster is additional to computing costs.
Observability and distributed tracing
In a monolith, a log file and a debugger are enough for most investigations. In a microservices system, tracing a request that crosses 5 services requires distributed tracing (OpenTelemetry, Jaeger, Zipkin), log correlation across different services, and metrics for each service. Stacks like Prometheus with Grafana, or SaaS solutions like Datadog, add technical complexity and non-negligible economic costs.
Service mesh and secure communication
To manage communication between services securely and reliably, many microservices teams end up adopting a service mesh like Istio or Linkerd. These tools manage mTLS between services, advanced load balancing, circuit breaking, and retry policies at the infrastructure level. But they have a significant learning curve and add overhead to deployments.
Integration and contract testing
In the monolith, integration tests test components within the same process. In microservices, integration tests must manage separate services, with options for mocks, stubs, or service virtualization. Contract testing (with tools like Pact) becomes necessary to guarantee that contracts between services are respected when teams change APIs. This adds an entire testing discipline that doesn't exist in the monolith.
Partial failure management
In a distributed system, a call to a service can fail for a thousand reasons: network timeout, service temporarily unavailable, overload. Managing these partial failures requires patterns like circuit breaker, retry with exponential backoff, bulkhead, and explicit design of every inter-service interaction in a fault-tolerant way. This complexity doesn't exist in the monolith.
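To give a flavor of what this means in code, here is a minimal retry-with-exponential-backoff helper — a hand-rolled sketch for illustration; in production you would normally reach for a library such as Polly, which also provides circuit breakers and bulkheads.

```csharp
using System;
using System.Threading.Tasks;

public static class Resilience
{
    // Retries a failing async operation with exponentially growing delays:
    // baseDelayMs, then 2x, 4x, ... Rethrows once maxAttempts is exhausted.
    public static async Task<T> RetryAsync<T>(
        Func<Task<T>> action, int maxAttempts = 3, int baseDelayMs = 100)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await action();
            }
            catch when (attempt < maxAttempts)
            {
                await Task.Delay(baseDelayMs * (1 << (attempt - 1)));
            }
        }
    }
}
```

And this is only one pattern: every inter-service call in the system needs a conscious decision about retries, timeouts, and what happens when the retry budget runs out.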
Empirical studies indicate that the operational cost of a microservices system is 3-5 times that of an equivalent monolith. If your project doesn't have the signals that justify this additional cost, you're paying a very high price for complexity you don't need.
For a broader view of architectural choices and their impacts, the article on Clean Architecture in C# provides useful context on how to organize code regardless of the choice between monolith and microservices.
Strangler Fig Pattern: how to migrate from monolith to microservices without rewriting everything
If you are in the situation where you have an existing monolith and you have the concrete signals that justify extracting microservices, the Strangler Fig pattern is the safest way to do it without rewriting the system from scratch.
The name comes from a tropical fig (Ficus aurea) that grows by wrapping itself around a host tree, slowly replacing its structure until the host dies and only the fig remains. The software pattern works analogously: new services grow "around" the monolith, intercepting its traffic, until the original monolith has been gradually replaced.
How the Strangler Fig pattern works
The practical implementation happens in four phases:
Phase 1: identify the bounded context to extract. Don't extract functionality at random. Start with the bounded context that has the strongest signals for extraction: the one with the most different scalability requirements, the one causing the most merge conflicts between teams, or the one with specific compliance requirements.
Phase 2: create the new service. Build the new service in parallel with the monolith, with its own infrastructure, its own database, its own CI/CD pipeline. Don't replace the code in the monolith: duplicate it in the service until the service is ready for production traffic.
Phase 3: insert a facade/proxy. Add a routing layer (typically an API Gateway or a configurable proxy like YARP in .NET) in front of the system. This layer decides whether to route requests to the monolith or the new service, allowing gradual rollout and immediate rollback capability.
Phase 4: migrate traffic progressively. Start with 1-5% of traffic to the new service. Monitor carefully. If everything goes well, increase progressively. Only when the service has handled 100% of production traffic for a sufficiently long period, remove the corresponding code from the monolith.
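The facade of phase 3 can be remarkably small. Here is a sketch of what the routing might look like as a YARP `appsettings.json` fragment — route names, paths, and addresses are invented for illustration: requests under `/checkout` go to the new service, everything else still hits the monolith.

```json
{
  "ReverseProxy": {
    "Routes": {
      "checkout-extracted": {
        "ClusterId": "checkout-service",
        "Match": { "Path": "/checkout/{**catch-all}" }
      },
      "everything-else": {
        "ClusterId": "monolith",
        "Match": { "Path": "{**catch-all}" }
      }
    },
    "Clusters": {
      "checkout-service": {
        "Destinations": {
          "primary": { "Address": "https://checkout.internal.example/" }
        }
      },
      "monolith": {
        "Destinations": {
          "primary": { "Address": "https://legacy.internal.example/" }
        }
      }
    }
  }
}
```

With this setup, rolling back is a configuration change: repoint the checkout route at the monolith cluster and redeploy nothing.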
The necessary conditions for successful migration
The Strangler Fig pattern works well only if the monolith has sufficiently clear boundaries to identify what to extract. If the monolith is a big ball of mud where everything depends on everything, migration is nearly impossible without a preliminary phase of internal refactoring.
This is why investing in a Modular Monolith from the start has strategic value: you're building boundaries that can be used as extraction lines if and when it becomes necessary.
Service mesh, API gateway, service discovery: the infrastructure microservices require
One of the most underestimated aspects of microservices is the infrastructure needed to make them work correctly in production. Each component of this infrastructure solves real problems, but adds complexity and requires specific skills.
API Gateway
In a microservices system, external clients (frontend, mobile apps, third-party integrations) should never directly call internal services. An API Gateway is the single entry point that handles: routing requests to the appropriate services, authentication and authorization, rate limiting, request transformation, and aggregation of responses from multiple services.
In the .NET and Azure ecosystem, the most common options are Azure API Management, Ocelot (open source .NET), and YARP (Yet Another Reverse Proxy, developed by Microsoft). Each has a different profile for complexity and features.
Service Discovery
In a dynamic system where containers scale and move, services cannot communicate with each other using fixed IP addresses. Service discovery is the mechanism that allows services to find each other: when a service starts, it registers itself in a central registry (Consul, etcd, or Kubernetes' internal DNS service). When another service needs to communicate with it, it resolves the current address through the registry.
Service Mesh
A service mesh is an infrastructure layer that manages all communication between services: mTLS for secure communication, load balancing between instances, circuit breaking, retry logic, and communication observability. The most well-known options are Istio and Linkerd. They add overhead but shift the responsibility for resilience from the application to the infrastructure.
Case studies: when microservices saved the project and when they sank it
Concrete case studies are more informative than any abstract principle. These two scenarios are representative of patterns that repeat frequently in the software development market.
Case 1: the e-commerce platform that needed microservices
A mid-sized e-commerce platform reached a critical point: the development team had grown to 25 people, divided into 4 functional teams (product catalog, orders, payments, logistics). The original monolith, although well-structured, had become an organizational bottleneck: every release required coordination between all 4 teams, merge conflicts were frequent, and checkout during peak periods (Black Friday) needed to handle 50x normal traffic while the catalog absorbed the same peaks comfortably.
The decision to first extract the checkout service as a separate microservice, then the catalog search engine, allowed these two functions to scale independently during peak periods. The rest of the system remained as a Modular Monolith. The result is a hybrid architecture where microservices exist only where signals justify them.
Case 2: the software company that lost 18 months on the wrong architecture
A software house convinced a manufacturing sector client to rewrite their legacy management system as a microservices system, presenting it as the "modern choice." The team was 6 developers, none with previous experience with Kubernetes or distributed systems. The domain had strong transactionality requirements between modules (orders, warehouse, invoicing are tightly coupled by definition).
After 18 months, the system was in production but with serious problems: high latency for operations crossing multiple services, data consistency issues in multi-step operations, and an operational cost on Azure that was 4 times the planned budget. The resolution was a partial consolidation of services into a Modular Monolith, with only the reporting and third-party integration modules remaining as separate services.
The lesson: architecture must follow the real problem, not the technology trend. In this case, a Modular Monolith would have provided all the necessary benefits without the costs that nearly sank the project.
How to make the decision in your team: a practical decision framework
A practical decision framework is based on three key questions to answer honestly based on the current situation of the project, not on hypothetical future scenarios.
Question 1: how many teams need to deploy independently?
This is the most important question. If you have a single team or teams that work in a coordinated manner, the monolith (or Modular Monolith) is almost always the correct choice. If you have 3 or more teams that must be able to release autonomously without waiting for each other, microservices begin to have organizational sense.
Question 2: do you have components with genuinely different scalability requirements today?
"We might need to scale X in the future" is not a valid signal. The valid signal is: "Component X today has resource requirements 10x different from other components, and this is causing economic waste or performance problems." If this is true for some specific components, extracting only those as services makes sense. Not everything.
Question 3: does the team have the skills to manage distributed systems?
This question needs to be asked honestly. Managing Kubernetes in production, implementing distributed tracing, designing for partial failure, managing eventual consistency: these are specific skills that take time to acquire. If the team doesn't have them, the cost of learning them in the context of a production system is high and risky.
If you answer "no" to the first two questions, start with the Modular Monolith. If you answer "no" to the third, start with the Modular Monolith regardless of how you answer the first two. You can always extract services later when you have the real signals, skills, and necessary infrastructure.
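For illustration only, the framework above can be encoded as a toy function — not a real rule engine; the parameter names and result labels are invented:

```csharp
public static class ArchitectureDecision
{
    // Encodes the three questions in the order the text gives them.
    public static string Recommend(
        bool multipleIndependentTeams,   // question 1
        bool divergentScalingToday,      // question 2
        bool distributedOpsSkills)       // question 3
    {
        // A "no" to question 3 overrides everything else.
        if (!distributedOpsSkills) return "Modular Monolith";

        // A "no" to both question 1 and question 2: no signal for distribution.
        if (!multipleIndependentTeams && !divergentScalingToday)
            return "Modular Monolith";

        // Otherwise: extract services only where the signals actually point.
        return "Extract targeted microservices";
    }
}
```

Notice how hard it is to reach the microservices branch: that asymmetry is the whole point of the framework.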
Becoming a Software Architect means knowing how to do exactly this: making architectural decisions based on evidence, not trends. If you want to develop this skill systematically, the article on how to become a Software Architect provides a concrete path.
Knowledge of classic architectural patterns is also fundamental for this skill: the article on software architectural patterns covers the building blocks found in all well-designed distributed and monolithic systems.
Frequently asked questions
When do microservices make sense?
Microservices make sense when different teams need to release parts of the system independently, when different components have radically different scalability requirements, or when the system is large enough to justify the additional operational complexity. Team size is the most reliable signal: with fewer than 10-15 developers, a monolith is almost always the right choice.
Do microservices scale better than a monolith?
Not necessarily. A well-written monolith can scale horizontally across multiple instances and handle high loads. Microservices allow individual components to scale independently, but this granularity only makes sense when bottlenecks genuinely differ per component. In most systems, the database is the primary bottleneck, and microservices don't automatically solve that problem.
What is a Modular Monolith?
A Modular Monolith is a monolithic architecture where the code is divided into modules with explicit boundaries and controlled dependencies, but is deployed as a single artifact. It's an intelligent compromise that provides separation of concerns without the operational complexity of microservices. It allows services to be extracted in the future if needed, starting from already well-defined boundaries.
Can you migrate gradually from a monolith to microservices?
Yes, and it's the most common path. The Strangler Fig pattern allows you to gradually extract functionality from the monolith into new services without rewriting everything from scratch. The fundamental condition is that the monolith has clear enough boundaries to identify what to extract. A big ball of mud monolith is nearly impossible to migrate safely.
What are the hidden costs of microservices?
Hidden costs of microservices include: Kubernetes or equivalent infrastructure, service mesh for secure communication, distributed tracing and observability, partial failure management with circuit breakers, network latency between services, integration and contract testing complexity, and distributed deployment overhead. Empirical studies suggest the operational cost of a microservices system is 3-5x that of an equivalent monolith.
