Unit testing in .NET to write code that doesn't let you down
Matteo Migliore

Matteo Migliore is an entrepreneur and software architect with over 25 years of experience developing .NET-based solutions and evolving enterprise-grade application architectures.

He has led enterprise projects, trained hundreds of developers, and helped companies of all sizes simplify complexity by turning software into profit for their business.

There's a phrase that, if you work in the .NET world, gives you a pang in your stomach every time you hear it said out loud.

A sentence that is not an excuse, but a confession: "It worked on my machine."

This is not just a figure of speech; it's the signal that something in the way you work is not under control, even if you pretend nothing happened.

It's the exact moment when you understand that the code you wrote doesn't really belong to you, because all it takes is changing its context to make it collapse.

If you're honest with yourself, you know that that uncomfortable feeling doesn't come from the bug itself.

It arises much earlier, when you decide to release with that tension in your stomach, hoping that no one touches that delicate part of your system.

It arises when you accept a hastily done refactoring because "there is no time to test everything" or when you deliberately avoid certain files because no one really knows what they do, and you'd rather not find out the hard way.

This state of mind is the direct consequence of a way of working based on hope rather than control.

You write code that works today, but you have no guarantee of what will happen tomorrow.

Every change becomes a gamble, every release a small act of faith.

And this is where work stops being stimulating and starts to become exhausting, because you constantly live with the fear of breaking something that you wouldn't know how to fix quickly.

This article was created for those who are tired of working like this.

For those who feel that, despite the experience accumulated, they still lack a solid foundation on which to rely.

For those who have already experienced inexplicable regressions, trivial bugs discovered by QA or, worse, by the end customer.

For those who have sacrificed weekends and evenings to fix problems that should never have reached production.

Here we will not talk about tests as an academic exercise or as a technical trend to follow because "everyone does it".

We will talk about professionalism, control and responsibility: how to stop entrusting your reputation to luck and start building a way of working that allows you to intervene on the code with clarity, even under pressure.

If today you feel like you are walking on unstable ground, if every change seems like a risk and not a conscious choice, this is not your personal limit.

It is the symptom of an incomplete process.

And it is exactly from here that we must start to restore order, method and serenity in your daily work.

What is Unit Testing and why is it important in .NET

Clear path to writing robust unit tests in .NET.

In the landscape of modern software development, Unit Testing is not an optional best practice, nor a quirk of particularly scrupulous developers.

It is the clear line between those who write code hoping that it will hold up and those who build software knowing exactly what will happen when it is run, modified or extended over time.

In the .NET environment this distinction is even more evident, because most real projects are not born small and tidy, but grow rapidly until they become complex, stratified systems that are difficult to keep under control.

Every untested method is an implicit promise.

You're telling yourself, your team, and the company that that piece of code will always do the right thing, in every future context.

The problem is that this promise is not verified.

When something stops working, you don't know if the problem is there, elsewhere or in the interaction between multiple components.

It is at that moment that the fear of touching the code arises, that subtle sensation that pushes you to postpone, to find a temporary remedy, to avoid structural changes.

Unit Testing serves exactly to break this mechanism.

Writing a unit test means stating explicitly how a single unit of code should behave.

It means turning a hypothesis into a testable rule.

If the behavior changes, the test fails.

There is no interpretation, there is no ambiguity, there is no room for “maybe”.

There is a precise signal that indicates when the code stops meeting the expectations on which it was based.
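As a concrete sketch of this idea, using xUnit and a hypothetical `PriceCalculator` class invented purely for illustration, an expectation becomes an executable rule:

```csharp
using Xunit;

// Hypothetical unit under test, used only for illustration.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal percent)
        => price - (price * percent / 100m);
}

public class PriceCalculatorTests
{
    [Fact]
    public void ApplyDiscount_ReducesPriceByGivenPercentage()
    {
        var calculator = new PriceCalculator();

        decimal result = calculator.ApplyDiscount(100m, 20m);

        // The expectation is now an executable rule:
        // if the behavior changes, this test fails.
        Assert.Equal(80m, result);
    }
}
```

If someone later changes the discount logic, the failing test is the precise signal described above, pointing directly at the broken expectation.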

In .NET this approach is critical because complex enterprise projects are by nature interconnected.

A business service can depend on repositories, external services, environmental configurations, and shared logic.

Without testing, every change becomes a gamble with unpredictable ripple effects.

With tests, however, each change is accompanied by immediate feedback that allows you to understand if you are improving the system or if you are introducing a risk.

Taking Unit Testing seriously isn't just about catching bugs before production, although that alone would have enormous value.

It means improving the intrinsic quality of the software.

Testable code is necessarily more modular, more decoupled and more readable.

It is code designed to evolve without fear, and it is this ability for controlled evolution that distinguishes those who write working code from those who build professional software.

Introduction to xUnit for testing in .NET

Strategic approach to unit testing in .NET with xUnit.

When you decide to stop relying on intuition and start looking for real control over your code, choosing which tool to use becomes a strategic decision, not a technical detail.

xUnit didn't become the de facto standard in the .NET ecosystem for no reason.

It has established itself because it embodies a precise philosophy, designed for modern development and for those who work on projects that must stand up over time, under pressure and with a growing team.

Unlike older frameworks, xUnit does not try to be accommodating: it does not offer reassuring but often dangerous shortcuts.

It is designed to push you towards better practices, almost without you realizing it.

One of its most significant choices is the absence of global setup and teardown mechanisms, which in other frameworks favor coupled and fragile tests.

Here isolation is not an option, it is an implicit rule.

This approach results in a number of features that distinguish xUnit from many other testing frameworks:

  • Each test runs in a new instance of the class, avoiding unintentional state sharing
  • There are no global mechanisms that mask dependencies or side effects
  • The test lifecycle is explicit and predictable
  • Dependencies must be declared and injected, not hidden
  • Errors emerge quickly, instead of remaining latent until advanced stages of the project
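The first point can be sketched with a minimal xUnit example. In frameworks that reuse the test-class instance, the counter below would accumulate across tests; in xUnit, each [Fact] gets a fresh instance:

```csharp
using Xunit;

// xUnit creates a brand-new instance of this class for every [Fact],
// so the field below can never leak state between tests.
public class IsolationTests
{
    private int _counter;

    [Fact]
    public void FirstTest_SeesFreshState()
    {
        _counter++;
        Assert.Equal(1, _counter); // always 1, regardless of execution order
    }

    [Fact]
    public void SecondTest_AlsoSeesFreshState()
    {
        _counter++;
        Assert.Equal(1, _counter); // a new instance was created for this test too
    }
}
```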

And it is precisely from here that we understand why xUnit imposes such a rigorous approach.

It forces you to reckon with state, eliminate hidden dependencies, and stop relying on uncontrolled side effects.

If a test works only because another was run first, xUnit immediately confronts you with the problem, without compromise.

Another aspect that makes xUnit central to the modern .NET world is its natural integration with real work tools.

It works seamlessly with Visual Studio, the .NET CLI, and continuous integration pipelines.

This means the tests you write today are the same ones that will run tomorrow on a build server, in a container or in the cloud, without special adaptations or configurations.

Choosing xUnit means aligning yourself with how the .NET community that builds frameworks, platforms and enterprise applications works.

It is a sign of technical maturity, but above all of mental maturity.

It indicates that you are not trying to “pass tests,” but to build a system that is reliable, predictable, and under control, even as complexity increases.

If reading this you recognized yourself in that feeling of uncertainty every time you touch the code, know that it is not a personal limit.

It's a sign that you're working without a system that can protect your software.

In the .NET course you learn to build a solid, testable, scalable foundation so you can stop "hoping it works" and start knowing why it works.

It is the step that transforms development from continuous risk to controlled process.

Writing unit tests with xUnit: structure and methods

Arrange Act Assert structure for robust unit tests in .NET.

Writing effective unit tests doesn't mean filling your project with methods that test for trivialities.

It means adopting a more rigorous way of thinking than the one that often guides the first draft of the code.

With xUnit this mentality emerges naturally, because the framework pushes you to organize each test as a controlled experiment, with clear boundaries and explicit intentions.

The most common structure, known as Arrange, Act, Assert, is not a stylistic convention in itself.

It's a mental pattern that forces you to separate data preparation from the execution of the action and the verification of the result.

When this separation is respected, the test becomes immediately readable, even for those who have not written a single line of that code.

In the preparation phase you define the initial context that you consider relevant.

You are stating which conditions must be true before the code runs.

This step is fundamental because it makes explicit the assumptions that would otherwise remain hidden in the code.

In the execution phase you isolate the action under test, often a single method call, avoiding mixing logic and preparation.

Finally, in the verification phase, you state the expected result and compare it with what actually happened.
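The three phases can be sketched with xUnit as follows; the `ShoppingCart` class is a hypothetical example invented for illustration:

```csharp
using System.Collections.Generic;
using System.Linq;
using Xunit;

// Hypothetical unit under test.
public class ShoppingCart
{
    private readonly List<decimal> _items = new();
    public void Add(decimal price) => _items.Add(price);
    public decimal Total => _items.Sum();
}

public class ShoppingCartTests
{
    [Fact]
    public void Total_SumsAllAddedItems()
    {
        // Arrange: the initial context that matters for this behavior
        var cart = new ShoppingCart();
        cart.Add(10m);
        cart.Add(15.5m);

        // Act: a single, isolated action
        decimal total = cart.Total;

        // Assert: the expected outcome, stated explicitly
        Assert.Equal(25.5m, total);
    }
}
```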

A test structured in this way reads like a short story.

Anyone who opens it in six months doesn't have to interpret anything.

They immediately understand what problem you were addressing, what you did, and what result you considered correct.

The direct benefit is a decrease in perceived complexity and a test suite that documents system behavior.

xUnit provides a set of expressive assertions that go beyond simple equality.

Using them carefully is an integral part of the job.

A test that fails with a generic message doesn't help anyone.

A test that clearly explains what you expected and what you got immediately guides you to the cause of the problem.
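A few of xUnit's expressive assertions, sketched in a standalone test; the values and names are illustrative:

```csharp
using System;
using Xunit;

public class AssertionExamples
{
    [Fact]
    public void ExpressiveAssertions_ProduceClearFailureMessages()
    {
        var tags = new[] { "net", "xunit", "testing" };

        Assert.Contains("xunit", tags);          // collection membership
        Assert.DoesNotContain("java", tags);     // negative check
        Assert.StartsWith("net", "networking");  // string shape
        Assert.InRange(7, 1, 10);                // numeric bounds

        // Exceptions are behavior too: verify them explicitly.
        var ex = Assert.Throws<ArgumentNullException>(
            () => throw new ArgumentNullException("orderId"));
        Assert.Equal("orderId", ex.ParamName);
    }
}
```

When one of these fails, the message names the expected value, the actual value and the assertion that broke, which is exactly the clarity the paragraph above describes.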

It is this attention to clarity that distinguishes a test written to "make up the numbers" from a test written to truly protect the code.

Writing tests this way is not extra work.

It is an investment that reduces anxiety, increases trust and allows you to intervene on the code with clarity, even when the system grows and the consequences of an error become increasingly significant.

Mocking and isolating dependencies in tests

Mocking and dependency isolation in .NET unit tests.

The point at which many developers start to hate testing is almost always the same: when tests fail for reasons that have nothing to do with the logic they wanted to verify.

An unreachable database, a slow connection, an external service that responds unpredictably.

At that moment the test stops being a safety net and becomes an annoying obstacle, something that slows down rather than helps.

This is where the concept of isolation comes into play, which is the heart of Unit Testing done well.

A unit test should never depend on the outside world.

In practice, the most common problems that make tests fragile almost always arise from uncontrolled external dependencies, such as:

  • Direct access to real databases or shared environments
  • Non-deterministic calls to external services
  • Use of environment-dependent times, dates or configurations
  • Objects instantiated internally rather than provided from outside
  • Business logic mixed with infrastructure code

Mocking is the technique that allows you to regain control.

By replacing a real dependency with a controlled stand-in, you can decide how that component behaves.

You can simulate correct answers, errors, exceptions or borderline situations without having to actually reproduce them.

In this way the test becomes fast, deterministic and repeatable, regardless of the environment in which it is performed.
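A minimal sketch of this, assuming xUnit together with Moq (a common mocking library, not named above); the `IExchangeRateProvider` interface and `PriceConverter` class are hypothetical:

```csharp
using Moq;
using Xunit;

// Hypothetical dependency: in production this would be a slow,
// unreliable HTTP call to an external service.
public interface IExchangeRateProvider
{
    decimal GetRate(string currency);
}

// Hypothetical unit under test, receiving its dependency from outside.
public class PriceConverter
{
    private readonly IExchangeRateProvider _rates;
    public PriceConverter(IExchangeRateProvider rates) => _rates = rates;

    public decimal ToCurrency(decimal amount, string currency)
        => amount * _rates.GetRate(currency);
}

public class PriceConverterTests
{
    [Fact]
    public void ToCurrency_UsesTheProvidedRate()
    {
        // The mock decides how the dependency behaves: no network, no surprises.
        var rates = new Mock<IExchangeRateProvider>();
        rates.Setup(r => r.GetRate("EUR")).Returns(0.9m);

        var converter = new PriceConverter(rates.Object);

        Assert.Equal(90m, converter.ToCurrency(100m, "EUR"));
    }
}
```

The same setup can just as easily return an error or throw an exception, simulating borderline situations that would be hard to reproduce against a real service.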

But the true value of mocking is not just operational.

It's architectural.

In order to simulate a dependency, you must first make it replaceable.

This forces you to design code more cleanly, using interfaces, dependency injection and well-defined boundaries.

If a class directly instantiates its dependencies, it cannot be isolated; and if it cannot be isolated, it will hardly be maintainable over time.

In this sense, testing stops being a post-development phase and becomes a guide to design.

It forces you to separate responsibilities, reduce coupling, and make relationships between components explicit.

This is how the code goes from being simply functional to truly being in control, even when complexity increases.

If your tests today are brittle, slow, or difficult to maintain, the problem isn't the framework: it's how you're designing the system.

In the .NET course you learn to design truly testable code, where each component has clear responsibilities and no unnecessary dependencies.

This is where testing stops being a burden and becomes a strategic tool for writing software that grows without collapsing.

Managing the lifecycle of objects in unit tests

Managing the unit test lifecycle in .NET.

One of the most frustrating problems that emerge when a test suite begins to grow is tests behaving unpredictably.

They pass when executed individually, but fail for no apparent reason when run all together.

This type of behavior quickly undermines trust in tests and leads many developers to disable or ignore them, completely defeating their purpose.

The cause, in most cases, is incorrect management of the object life cycle.

When multiple tests share state, resources, or implicit data, a coupling is created that makes the whole suite fragile.

Static variables, temporary files that are never cleaned up, contexts reused without control.

These are all elements that introduce invisible dependencies between tests that should be completely independent.

xUnit tackles this problem head-on by creating a new instance of the test class for each test method.

This choice eliminates many sources of error at the root and forces you to think of each test as an autonomous unit.

When the outcome of a test depends on the execution order, the framework exposes a hidden dependency that must be resolved.

In the real world, however, there are situations where creating some resources from scratch for each test would be too expensive.

Complex contexts, in-memory databases, supporting infrastructures require a more pragmatic approach.

This is where xUnit fixtures come into play, allowing you to share resources in a controlled way while still maintaining the logical isolation of the tests.

Managing the life cycle isn't just about creating and using, it's also about knowing how to properly close what you've opened.

Implementing explicit teardown mechanisms is not a formality, but a responsibility.
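This controlled sharing plus explicit teardown can be sketched with xUnit's `IClassFixture` together with `IDisposable`; the `DatabaseFixture` below is a hypothetical stand-in for a genuinely expensive resource:

```csharp
using System;
using Xunit;

// An expensive resource shared by all tests in one class.
// xUnit creates it once, injects it into every test, and disposes it at the end.
public class DatabaseFixture : IDisposable
{
    public DatabaseFixture()
    {
        // e.g. open an in-memory database or start a lightweight service
        Connection = "opened";
    }

    public string Connection { get; }

    public void Dispose()
    {
        // Explicit teardown: close what you opened, leave no residue behind.
    }
}

public class RepositoryTests : IClassFixture<DatabaseFixture>
{
    private readonly DatabaseFixture _db;

    // The fixture is declared and injected, never hidden in static state.
    public RepositoryTests(DatabaseFixture db) => _db = db;

    [Fact]
    public void Connection_IsAvailable()
        => Assert.Equal("opened", _db.Connection);
}
```

Each test class still gets a fresh instance per test; only the fixture is shared, which keeps the sharing visible and intentional.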

As in a laboratory or a professional kitchen, leaving residue behind compromises the final result.

In tests, that residue turns into instability, wasted time and mistrust.

Those who master these aspects demonstrate that they do not just write tests, but that they know how to design a true verification infrastructure that is reliable over time.

Running unit tests in .NET with Visual Studio and the dotnet CLI

Running .NET unit tests with Visual Studio and CLI.

Writing unit tests is useless if they don't become part of your daily workflow.

A test that sits unused in the project, run only occasionally or launched "when there is time", protects nothing.

The true power of testing emerges when tests are run often, easily and without friction, until they become an automatic reflex of the way you work.

In the .NET environment you have two complementary tools at your disposal, designed for different moments but both essential.

On the one hand there is the Visual Studio graphical environment, which offers an immediate view of the state of the test suite.

On the other is the dotnet CLI, which represents industrial efficiency and automation.

Knowing how to use both with ease is a distinctive trait of the professional developer.

In practice, each of the two tools covers different and complementary needs:

  • Visual Studio is ideal for analyzing tests, debugging and understanding execution flows
  • The CLI is perfect for automating, integrating tests into CI processes and maintaining consistency across environments
  • Using them together allows you to move naturally from local reasoning to automated verification

Learning to use the CLI also means freeing yourself from dependence on the IDE.

You can filter tests, run them in parallel, collect metrics, and automate checks before each commit.
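A few of these operations, sketched as dotnet CLI commands; the test-class and trait names are placeholders:

```shell
# Run every test in the current solution or project
dotnet test

# Run only the tests of one class (placeholder name)
dotnet test --filter "FullyQualifiedName~OrderServiceTests"

# Run only tests marked with a trait, e.g. [Trait("Category", "Fast")]
dotnet test --filter "Category=Fast"

# Collect code coverage using the built-in cross-platform data collector
dotnet test --collect:"XPlat Code Coverage"
```

The same commands work unchanged in a CI pipeline, which is exactly what makes the CLI the backbone of an automated verification process.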

This level of control turns testing into an ongoing discipline, not an occasional activity.

This is how the phrase "it worked on my machine" definitively disappears and is replaced by a repeatable, verifiable and professional way of working.

How to test ASP.NET Core applications

Testing ASP.NET Core applications in controlled environments.

Testing ASP.NET Core applications requires a change in mindset compared to testing pure class libraries.

Here you are no longer working with isolated, self-contained code, but with a framework rich in abstractions, with an HTTP pipeline, middleware and dependency injection mechanisms.

Without the right approach, the risk is turning tests into something slow, fragile and expensive to maintain.

The most common mistake is trying to test everything by going over the network, starting the application and making real HTTP calls.

This approach, often justified as "more realistic", actually leads to slow, unstable tests that are difficult to run frequently.

Each test becomes a small release, each execution a useless wait.

The result is that tests stop being used, just when they are needed most.

The professional approach is different.

ASP.NET Core is designed to be testable from the ground up.

Controllers and services can be treated as simple C# classes, instantiated directly and tested by isolating their dependencies.

This way you test your presentation and business logic without involving the web server, database or network infrastructure.
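A minimal sketch of this, assuming xUnit and Moq; the `IProductCatalog` interface and `ProductsController` are hypothetical examples:

```csharp
using Microsoft.AspNetCore.Mvc;
using Moq;
using Xunit;

// Hypothetical dependency behind the controller.
public interface IProductCatalog
{
    string? FindName(int id);
}

// A controller is just a C# class: it can be instantiated directly.
public class ProductsController : ControllerBase
{
    private readonly IProductCatalog _catalog;
    public ProductsController(IProductCatalog catalog) => _catalog = catalog;

    public IActionResult Get(int id)
    {
        var name = _catalog.FindName(id);
        return name is null ? NotFound() : Ok(name);
    }
}

public class ProductsControllerTests
{
    [Fact]
    public void Get_ReturnsNotFound_ForUnknownProduct()
    {
        // No web server, no HTTP, no network: just the controller's logic.
        var catalog = new Mock<IProductCatalog>();
        catalog.Setup(c => c.FindName(99)).Returns((string?)null);

        var controller = new ProductsController(catalog.Object);

        Assert.IsType<NotFoundResult>(controller.Get(99));
    }
}
```

Note what is being tested: your decision to return 404 for an unknown product, not the framework's routing or serialization.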

True competence, however, lies not only in knowing how to write these tests, but in understanding what to test and what to ignore.

There is no point in verifying that the framework serializes a JSON correctly or that the routing works as documented.

That is code already tested and maintained by Microsoft.

However, there is enormous value in testing how your code reacts to invalid input, domain errors or edge conditions.

For scenarios where you need to verify integration between multiple components of the HTTP pipeline, ASP.NET Core provides tools that allow you to launch the application in memory, without opening real ports.

These tests are slower than pure unit tests, but they offer additional security when they are really needed.

Knowing how to balance these levels of testing is an architectural skill that distinguishes those who copy examples from those who build reliable systems.

If you're still testing your applications as if they were isolated scripts, you're leaving reliability and time on the table.

In the .NET course you learn how to test real applications, with complex pipelines, dependencies and flows, without turning every change into a risk.

This is where testing stops being theory and becomes a tool that speeds up your team's work.

Test asynchronous code with xUnit

Async tests in .NET with properly handled async and await.

In the modern .NET world, asynchronous programming is no longer the exception, but the norm.

Database access, network calls, file operations, and external services make constant use of async and await.

This model improves scalability and efficiency, but it also introduces complexity that puts your skills to the test, especially when it comes to verifying the code's behavior.

A superficially written test can give a false sense of security.

If an asynchronous method is invoked without being properly awaited, the test may terminate before the operation has completed.

In these cases the test passes, but it didn't really verify anything.

It is one of the most insidious errors, because it does not produce obvious failures, but leaves bugs uncovered that will only emerge in production, under load.

xUnit handles this scenario naturally, allowing test methods to be asynchronous themselves.

Returning a Task and using await in the test allows the framework to wait for the operation to complete before proceeding with assertions or with the cleanup phase.

This eliminates entire categories of hard-to-diagnose errors and makes the test behavior consistent with that of real code.
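A sketch of an async test written this way, with a hypothetical `ReportService` standing in for real asynchronous code:

```csharp
using System.Threading.Tasks;
using Xunit;

// Hypothetical async unit under test.
public class ReportService
{
    public async Task<int> CountLinesAsync(string content)
    {
        await Task.Yield(); // stands in for real async I/O
        return content.Split('\n').Length;
    }
}

public class ReportServiceTests
{
    // The test method itself returns a Task: xUnit awaits it, so the
    // assertion below cannot run before the operation has completed.
    [Fact]
    public async Task CountLinesAsync_CountsEveryLine()
    {
        var service = new ReportService();

        int lines = await service.CountLinesAsync("a\nb\nc");

        Assert.Equal(3, lines);
    }
}
```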

The real difficulty emerges when asynchronous dependencies come into play.

Correctly simulating a service that returns a task requires care.

A mocked component that does not return a completed Task, or that incorrectly blocks the execution flow, can cause deadlocks or tests that never finish.

Here we see the difference between those who use asynchrony out of habit and those who really understand its mechanisms.

Correctly testing error and cancellation cases is a further sign of maturity.

Verifying that an asynchronous method throws the right exception or honors a CancellationToken means designing your code with real, not ideal, scenarios in mind.
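Both cases can be sketched with xUnit's async assertion helpers; the `DownloadService` below is hypothetical:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Xunit;

// Hypothetical service that validates input and honors cancellation.
public class DownloadService
{
    public async Task<string> DownloadAsync(string url, CancellationToken ct)
    {
        if (string.IsNullOrEmpty(url))
            throw new ArgumentException("url is required", nameof(url));

        await Task.Delay(10, ct); // real I/O would observe the token the same way
        return "content";
    }
}

public class DownloadServiceTests
{
    [Fact]
    public async Task DownloadAsync_ThrowsOnEmptyUrl()
    {
        var service = new DownloadService();

        // Assert.ThrowsAsync awaits the task and verifies the exception type.
        await Assert.ThrowsAsync<ArgumentException>(
            () => service.DownloadAsync("", CancellationToken.None));
    }

    [Fact]
    public async Task DownloadAsync_HonorsCancellation()
    {
        var service = new DownloadService();
        using var cts = new CancellationTokenSource();
        cts.Cancel();

        // ThrowsAnyAsync accepts TaskCanceledException, which derives
        // from OperationCanceledException.
        await Assert.ThrowsAnyAsync<OperationCanceledException>(
            () => service.DownloadAsync("https://example.com", cts.Token));
    }
}
```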

This level of attention not only makes the tests more robust, it makes the whole system more reliable, predictable and ready to handle complex loads and operating conditions.

Practical example: Writing tests for an order management application

Testing business logic in ASP.NET Core applications.

To really understand how much testing changes the way you work, you need a scenario that doesn't look like a calculator, but is a little more complex.

Imagine the beating heart of an e-commerce: an order management service.

Here an error is not a technical detail, it is money that doesn't come in, customers that abandon, tickets that explode, reputation that is damaged.

It's the kind of code where "it worked on my machine" isn't just embarrassing.

It's unacceptable.

Let's say your OrderService has a PlaceOrder method.

A seemingly linear flow, but full of traps.

In a real-world scenario, such an operation involves multiple distinct responsibilities, each of which must be independently verified:

  • Check availability of the requested products
  • Correct calculation of prices, discounts and promotions
  • Payment management and possible failures
  • Consistent order saving in the correct state
  • Sending notifications or subsequent events

Each of these steps represents a point of possible failure, and for this reason must be isolated, controlled and verified independently.

The first step is to isolate.

You don't want a database, you don't want a gateway, you don't want to send emails.

You want to test the service logic, not the infrastructure.

Here mocking becomes your leverage.

You set up a mock warehouse that responds “available,” a mock gateway that accepts payment, and a repository that records the save call.

In the test, run PlaceOrder and verify that the result is a success and the order is processed.
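This ideal case can be sketched as follows, assuming xUnit and Moq; the interfaces and the `PlaceOrder` signature are assumptions made for illustration, not actual production code:

```csharp
using Moq;
using Xunit;

// Hypothetical shapes of the service's dependencies.
public interface IWarehouse { bool IsAvailable(string sku, int quantity); }
public interface IPaymentGateway { bool Charge(decimal amount); }
public interface IOrderRepository { void Save(string sku, int quantity); }

public class OrderService
{
    private readonly IWarehouse _warehouse;
    private readonly IPaymentGateway _payments;
    private readonly IOrderRepository _orders;

    public OrderService(IWarehouse w, IPaymentGateway p, IOrderRepository o)
        => (_warehouse, _payments, _orders) = (w, p, o);

    public bool PlaceOrder(string sku, int quantity, decimal total)
    {
        if (!_warehouse.IsAvailable(sku, quantity)) return false;
        if (!_payments.Charge(total)) return false;
        _orders.Save(sku, quantity);
        return true;
    }
}

public class OrderServiceTests
{
    [Fact]
    public void PlaceOrder_Succeeds_WhenStockAndPaymentAreOk()
    {
        // Mock warehouse says "available", mock gateway accepts payment.
        var warehouse = new Mock<IWarehouse>();
        warehouse.Setup(w => w.IsAvailable("SKU-1", 2)).Returns(true);
        var payments = new Mock<IPaymentGateway>();
        payments.Setup(p => p.Charge(50m)).Returns(true);
        var orders = new Mock<IOrderRepository>();

        var service = new OrderService(
            warehouse.Object, payments.Object, orders.Object);

        Assert.True(service.PlaceOrder("SKU-1", 2, 50m));
        // The repository records exactly one save call.
        orders.Verify(o => o.Save("SKU-1", 2), Times.Once);
    }
}
```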

This is the classic ideal case, but it's just the beginning: the real value emerges when you start testing failures.

If the warehouse has no stock, the order must be blocked before arriving at payment.

If payment is refused, the order must not be saved.

If sending the confirmation fails, you need to decide what happens: does the order remain valid with the notification retried later, or should the entire process roll back to its previous state?

Each test in these scenarios isn't just testing a condition, it's turning an implicit choice into an explicit rule.
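The first two failure rules can be sketched like this, again assuming xUnit and Moq; the `CheckoutService` and its interfaces are hypothetical shapes kept self-contained for illustration:

```csharp
using Moq;
using Xunit;

// Minimal hypothetical shapes for the failure scenarios.
public interface IStock { bool HasStock(string sku); }
public interface IPayments { bool Charge(decimal amount); }
public interface IOrders { void Save(string sku); }

public class CheckoutService
{
    private readonly IStock _stock;
    private readonly IPayments _payments;
    private readonly IOrders _orders;

    public CheckoutService(IStock s, IPayments p, IOrders o)
        => (_stock, _payments, _orders) = (s, p, o);

    public bool Checkout(string sku, decimal total)
    {
        if (!_stock.HasStock(sku)) return false;    // blocked before payment
        if (!_payments.Charge(total)) return false; // refused: nothing saved
        _orders.Save(sku);
        return true;
    }
}

public class CheckoutFailureTests
{
    [Fact]
    public void Checkout_NeverChargesPayment_WhenOutOfStock()
    {
        var stock = new Mock<IStock>();
        stock.Setup(s => s.HasStock("SKU-1")).Returns(false);
        var payments = new Mock<IPayments>();
        var orders = new Mock<IOrders>();

        var service = new CheckoutService(stock.Object, payments.Object, orders.Object);

        Assert.False(service.Checkout("SKU-1", 50m));
        // Explicit rule: no stock means the payment gateway is never touched.
        payments.Verify(p => p.Charge(It.IsAny<decimal>()), Times.Never);
    }

    [Fact]
    public void Checkout_NeverSavesOrder_WhenPaymentIsRefused()
    {
        var stock = new Mock<IStock>();
        stock.Setup(s => s.HasStock("SKU-1")).Returns(true);
        var payments = new Mock<IPayments>();
        payments.Setup(p => p.Charge(50m)).Returns(false);
        var orders = new Mock<IOrders>();

        var service = new CheckoutService(stock.Object, payments.Object, orders.Object);

        Assert.False(service.Checkout("SKU-1", 50m));
        // Explicit rule: a refused payment means no order is persisted.
        orders.Verify(o => o.Save(It.IsAny<string>()), Times.Never);
    }
}
```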

And then there are discounts, the point at which many systems silently break.

A parameterized test allows you to switch between different combinations of products, quantities and conditions and verify that the total is correct.

Here you are not simply “checking a number”.

You're locking down a business rule.

If the commercial policy changes tomorrow, the tests become the first alarm that forces you to intervene consciously.
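A parameterized test of this kind can be sketched with xUnit's [Theory] and [InlineData]; the "10% off from 100" discount rule is a made-up example:

```csharp
using Xunit;

// Hypothetical business rule: 10% discount on subtotals of 100 or more.
public class DiscountCalculator
{
    public decimal FinalTotal(decimal subtotal)
        => subtotal >= 100m ? subtotal * 0.9m : subtotal;
}

public class DiscountCalculatorTests
{
    // One [Theory] locks down the rule across several combinations.
    [Theory]
    [InlineData(50, 50)]    // below threshold: no discount
    [InlineData(99, 99)]    // edge: still below threshold
    [InlineData(100, 90)]   // exactly at threshold: 10% off
    [InlineData(200, 180)]  // well above threshold
    public void FinalTotal_AppliesDiscountFromOneHundred(
        decimal subtotal, decimal expected)
    {
        var calculator = new DiscountCalculator();

        Assert.Equal(expected, calculator.FinalTotal(subtotal));
    }
}
```

If the commercial policy changes, every one of these rows becomes a failing alarm until the rule is consciously updated.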

When you get to this level, something important happens.

You stop hoping that everything is fine, you start showing it.

And that difference isn't just seen in the code.

It shows in the way you talk to the team, the way you refactor, the way you release.

This is where testing becomes a personal competitive advantage, not just another technique.

If you haven't closed the page and have read this far, it means that you have recognized yourself in at least one of the situations described.

Not because your team isn't capable, but because you are experiencing first hand what happens when a system grows without clear direction and each intervention becomes more expensive than the previous one.

At this point the question is no longer whether something needs to be done, but how to do it without making things worse.

The point is not to add another tool, nor to chase yet another technological fad.

It means regaining control of software as a business lever, transforming what today absorbs time, energy and budget into a predictable, governable and profitable asset.

This is exactly what the .NET course aims to do: help you build a system that stands the test of time, that doesn't depend on individuals and that allows you to make decisions based on data, not intuition.

If you recognize your context in what you just read, it means you are at the right time to take a step forward.

Not because it's easy, but because continuing like this already has a cost that you are paying every day.

Leave your details, you will be contacted to arrange a call with one of our tutors.

After a few days, at a time that suits you, you will meet in a call in which a tutor will help you understand how our course works, and together you will evaluate whether it is the right path to bring control, clarity and sustainability back to your software.

It's not for everyone.

But if you're looking for a concrete way to stop chasing problems and start managing them, this is where you start.

At this point the choice is simple: continue to manage the consequences or put the control back where it belongs.

It's up to you to decide.

Leave your details in the form below

Matteo Migliore


Throughout his career, he has worked with organizations such as Cotonella, Il Sole 24 Ore, FIAT and NATO, leading teams in developing scalable platforms and modernizing complex legacy ecosystems.

He has trained hundreds of developers and supported companies of all sizes in turning software into a competitive advantage, reducing technical debt and achieving measurable business results.

You are reading because you want to stop patching fragile software. Discover the method for designing systems that stand the test of time.