Legacy software modernization: practical guide
Matteo Migliore

Matteo Migliore is an entrepreneur and software architect with over 25 years of experience developing .NET-based solutions and evolving enterprise-grade application architectures.

He has led enterprise projects, trained hundreds of developers, and helped companies of all sizes simplify complexity by turning software into profit for their business.

This guide is part of the complete section on software modernization and migration to .NET.

Three years ago I got a call from the IT manager of a metalworking company in Brescia.

Their ERP had been running for seventeen years on a physical server hidden in a cabinet in the warehouse.

The only developer who understood the code was about to retire. Nobody knew what was really inside it.

The owner was afraid to touch it. The IT manager was afraid to touch it. The team was afraid to touch it.

The software worked. And that was exactly the problem.

"It works, don't touch it" is not a strategy. It's compound-interest debt that grows silently every month.

Maybe as you read this you're thinking: our system works, we don't have any urgent problems.

I know. That's exactly what everyone says before something goes wrong.

That's what the IT manager in Brescia said two months before the server in that cabinet went down in the middle of end-of-month shipping.

That's what the production director at a ceramics company in Sassuolo said three weeks before a missing integration stopped the line for two days.

Almost everyone says it. Until the day it's no longer true.

In twenty-five years working on real .NET systems I've seen this scenario hundreds of times.

Italian manufacturing companies with legacy systems holding up production but that nobody truly understands anymore.

In this article you'll find everything you need to determine whether your system is genuinely at risk, what it's costing you to do nothing, and which method to use to modernize it without stopping production.

I won't give you textbook theory. I'll give you what I've learned in twenty-five years on real projects, with real money and production lines that could not stop.

If you have a system that works but that nobody wants to touch, this article is for you.

What a legacy system really is and why it has nothing to do with the age of the code

Legacy systems in Italian manufacturing SMEs

The first thing to clarify is the definition. Because it's almost always wrong.

When I hear "legacy system" in companies, ninety percent of the time they mean "old system." Software built years ago, dated technology, ugly interface.

That is not the correct definition. And it's a distinction that changes everything.

The definition nobody ever gave you

A legacy system is not simply old software.

A system from 2010 can be legacy. A system from 1990 might not be.

There is only one determining factor: can you modify it with confidence?

If the answer is no, you have a legacy system. Regardless of the year it was built and the technology used.

A system is legacy when modifying it requires more fear than competence.

When nobody knows why certain architectural decisions were made.

When adding a simple feature takes weeks and leaves everyone holding their breath.

When the team uses phrases like "touch that part and you don't know what happens" or "we still haven't figured out why it works this way."

That is the signal. Not the year it was built.

A legacy system is not old by definition. It is a system you no longer truly control.

The five technical characteristics that identify the problem

To be precise, a system is legacy when it exhibits at least three of these five characteristics.

  • Zero automated tests: every change requires extensive manual testing and still leaves residual risk. Nobody knows for certain whether adding a feature breaks something elsewhere.
  • Absent or obsolete documentation: business rules live in the code or, worse, in a single person's head. The document that exists is from 2009 and has never been updated.
  • Unsupported technologies: .NET Framework 2.0, Visual Basic 6, Delphi, ASP.NET WebForms running on versions that haven't received security patches in years.
  • Extreme coupling between modules: touching the orders module breaks something in warehouse management, with no apparent connection. The code is a network of undocumented dependencies.
  • Dependence on a single person: one person truly knows how the system works. Everyone else knows how to use it but not how to understand it. When that person is absent, everything stops.

If you recognize three or more of these signs, you're reading the right article.

The paradox that locks companies in place: "but it works"

This is the most subtle and most important point.

The paradox of the working legacy system is this: the better it works, the less you want to touch it.

The less you touch it, the less anyone who comes after knows how it works.

The less someone knows how it works, the more you depend on the person who does.

It's a circle that tightens. Always tighter. Until something breaks, or the key person leaves, or an integration becomes urgent and impossible.

I've seen this scenario in companies with twenty employees and in companies with eight hundred.

The mechanism is identical. Only the scale of the damage changes when it manifests.

The fact that the system works is not a guarantee. It's a deferral of the problem, with interest.

Why Italian manufacturing SMEs are particularly exposed

In the 1990s and 2000s many Italian companies built bespoke software that worked perfectly for the needs of that moment.

ERPs, MRP systems, production control systems: written in Visual Basic 6, Access, Delphi or the early versions of ASP.NET WebForms.

Nobody ever replaced them because they worked. Today those systems carry the core of the business but are also the company's greatest technological risk.

The problem is not that the software is slow or ugly. The problem is that you can no longer truly control it.

You can't integrate it with the new ERP systems. You can't export data in the format your largest customer requires. You can't add the supplier portal you've been planning for three years.

Among the most common legacy system examples in Italian SMEs are ERPs in VB6, quality control applications on Access, internal portals on ASP.NET WebForms and MES systems built to order twenty years ago.

A system you cannot modify without fear is not an asset. It is a prison.

If your system still runs on Visual Basic 6, the VB6 to .NET migration course is the first concrete technical step toward full control of the code.

The real cost of doing nothing: what doesn't show up on the balance sheet

Hidden costs of unmodernized legacy software

The problem with legacy software is not that it's old. The problem is that every month that passes without action the cost grows in a non-linear way.

It's not a cost visible on the balance sheet. It's a cost distributed across lost hours, recurring bugs, impossible integrations and legal risks that mature in silence.

Let's make it concrete, with numbers and real cases.

The mathematics of accumulating technical debt

Technical debt grows like financial debt. But with one crucial difference: it doesn't appear on the balance sheet.

Nobody records the hours lost in manual testing. Nobody accounts for the development days wasted deciphering undocumented code.

Nobody budgets for the cost of a bug fix that generates three more in apparently unrelated modules.

Yet those costs exist. And they grow every year, even without doing anything.

I analyzed the maintenance costs of a legacy system at a ceramics company in Modena.

The estimate to add a production report: fifteen development days. In a well-designed modern system: two days.

Thirteen days of difference for every single feature. Multiplied by every change you make each year.

In that company, the hidden cost of the legacy system was approximately one hundred and twenty development days per year. Every year. For years.

Reducing software maintenance costs is one of the most immediate and measurable benefits of any modernization effort.

Security: the vulnerabilities nobody is fixing

Recent versions of .NET Framework still receive security patches, but older ones do not: support for 4.5.2, 4.6 and 4.6.1 ended in 2022, and versions 2.0 and 3.0 have been unsupported for over a decade.

A system on .NET Framework 2.0, 3.0 or an out-of-support 4.x release is exposed to publicly known vulnerabilities that have been open for years. Vulnerabilities with public CVE numbers that anyone can look up.

In a manufacturing company with the ERP connected to the corporate network, a single unpatched vulnerability is the entry point for a ransomware attack.

This is not a theoretical case. I worked with a food sector company that suffered an attack precisely through a legacy component with a CVE open for three years.

Production stopped for four days. Estimated damage: over three hundred thousand euros between lost production, system recovery and crisis management.

It was not a large company. It was an SME with eighty employees.

What struck me most was not the financial damage. It was that everyone had known for months that component was vulnerable. Nobody had decided to act because "the system worked."

The component kept working. It was the attackers' backdoor that was working silently.

GDPR and legacy software: a silent time bomb

Legacy systems are almost always non-compliant with GDPR. Not out of negligence: the regulation is from 2018 and the software is from 2005.

The problem is not intentional. The problem is structural.

A system built in 2005 was not designed to handle the right to erasure. It cannot export all data belonging to a specific data subject on request. It has no personal data access logs.

The typical problems I find in legacy systems at Italian SMEs are always the same.

  • Inability to delete a specific customer's data without touching related tables in undocumented ways.
  • Absence of personal data access logs: nobody knows who viewed whose data.
  • Processing consent not digitally tracked: only signed paper forms exist in physical archives.
  • Inability to demonstrate the legal basis for processing for each data category.
  • Personal data distributed across non-normalized tables with no index that allows full traceability.

The Data Protection Authority has already issued fines to Italian SMEs for technical inadequacy of their systems. This is not a future risk: it is a present risk.

Modernization resolves these issues structurally. Temporary fixes on a legacy system are not enough: the problem is architectural, not a matter of code.

The growing cost of routine maintenance

In a well-designed system, adding a feature takes hours.

In a legacy system with years of layered modifications made without method, the same feature takes weeks.

Every change requires automated tests that don't exist in a legacy system. So every change falls back on extensive manual testing. And still leaves residual risk areas.

Every bug fix generates others because the code is coupled in undocumented ways. You fix the problem in order management. Two days later you find an anomalous behavior in invoicing.

This is not a competence problem for the team. It is a structural problem with the system.

A good team on a legacy system produces less than a mediocre team on a modern system. It's not their fault.

With every passing year, the team's productivity curve on the legacy system falls. The cost per developer increases. Delivery timelines lengthen.

The best developers leave. They don't want to work on code they didn't write, don't understand, and can't improve.

The key person risk: the real human debt

In many Italian SMEs there is a single person who truly knows how the legacy system works.

Often it's the IT manager who built it. Often they're approaching retirement. Often they're also the only one who knows how to manage the physical server it runs on.

When that person leaves the company, they take with them years of knowledge that is not written down anywhere.

It's not knowledge that can simply be documented. It's not just how the code works. It's why certain choices were made. Which exceptions are handled in that particular way. What happens if you touch that parameter.

I've seen companies spend months in "knowledge transfer" sessions with the outgoing key person.

The exercise produces documents. It doesn't truly transfer the knowledge. Because much of the knowledge about a legacy system is not verbalizable: it's tactile. You know it because you've touched the system thousands of times.

The key person risk is not managed with documentation. It is managed by modernizing before it's too late.

The impossible integration that cuts you out of the market

The market moves. Large customers request EDI integrations for order management. Your main supplier wants an online portal. The bank proposes a direct link for bank reconciliation.

In a modern system, building an integration takes weeks. In a legacy system, it takes months. It's often impossible without touching the heart of the code.

The result is that you build workarounds: shared Excel files, manual copies of data between systems, nightly exports written by third parties that nobody understands anymore.

Every workaround is a new point of risk. A new place where data can diverge. A new process that depends on someone who knows how it's done.

I've analyzed systems where the data flow between the ERP and the shipping system ran through three intermediate Excel files, two VBA macros written ten years ago and a nightly job that nobody knew exactly where it ran.

Every time something went wrong in that chain, finding the problem took hours. Often days.

The signals you're ignoring: the inner dialogue every IT manager knows

Before explaining how to modernize, I want to pause for a moment.

I want to talk to you about an internal dialogue I know well. I hear it every time I sit down with an IT manager for the first time.

It's the dialogue between what you see clearly and what you prefer not to look at too closely.

"We have people who know the system"

How many?

If the answer is two, the situation is fragile. If the answer is one, you already have a serious problem.

If that person is sick for a week, what happens? If they decide to leave with two weeks' notice, what do you do?

"We had them write a document." I know. I've seen it dozens of times. That document describes what the system does, not how the code actually works.

The knowledge of a legacy system is not transferred through a document. It is transferred through months of working side by side on the code.

If you don't have that structured continuity, the knowledge leaves when the person leaves.

"We've been working on the migration for a while"

How long is "a while"?

If it's more than eighteen months with not a single module in production, the project is stalled. Not in progress: stalled.

I know because I've seen it many times. The modernization project that has been "in progress" for two or three years but has yet to produce anything concrete in production.

The causes are always the same: no initial technical assessment, a full-rewrite approach disguised as modernization, an internal team without specific experience on this type of project.

A modernization project done with the right method delivers the first production results within ninety days. If you're not seeing them, the problem is not the complexity of the system: it's the method.

"We make periodic changes to keep it up to date"

This is the most insidious one.

Changes made to a legacy system without a structured method do not reduce risk. They increase it.

Every change made without tests adds a new layer of unverified code on top of already existing layers of unverified code.

Complexity grows. Coupling increases. The number of "parts not to touch" expands.

When I'm asked to analyze systems "kept up to date" in this way, I invariably find the same thing: the most recently added part is the most fragile, because whoever wrote it was afraid to touch what was already there.

The system is larger than before. But it is no more controllable than before. Often it is less so.

"The key person is retiring in a few years"

"In a few years" is not a plan. It's a hope.

Modernizing a medium-sized system takes between six and eighteen months. Plus the technical assessment phase.

If the key person retires in two years and you start the technical assessment now, you're already behind.

If you wait another six months because "we need to close the quarter first" or "we'll finish this project first," you're a year behind.

A technology succession plan is not made when the person is already leaving. It is made while they are still in the company and can actively participate in the knowledge transfer.

Every month you wait is a month of knowledge you're not documenting, not transferring, not securing.

"We don't have the budget for a complete migration"

I understand. Budget is always a real constraint.

But the right comparison is not between the cost of modernization and the zero cost of doing nothing.

The right comparison is between the cost of modernization and the cumulative cost of what you're already paying every year.

Cost of growing maintenance. Cost of opportunities lost to impossible integrations. Cost of security risk. Cost of GDPR risk. Cost of operational risk if the key person suddenly leaves the company.

In most cases where I've done this calculation with companies, the cumulative cost of doing nothing exceeds the cost of modernization by the second or third year.

You're not saving money. You're pushing the cost into the future, where it will be higher.
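To make that comparison tangible, here is a back-of-the-envelope sketch. Every figure in it is an invented placeholder, not a benchmark: plug in your own maintenance numbers before drawing any conclusion.

```csharp
// Illustrative break-even sketch: one-off modernization cost versus the
// cumulative cost of doing nothing. All amounts are invented placeholders.
using System;

public static class BreakEven
{
    public static void Main()
    {
        decimal modernizationCost = 150_000m; // one-off project cost (placeholder)
        decimal yearlyHiddenCost  = 80_000m;  // maintenance, lost days, workarounds (placeholder)
        decimal growthPerYear     = 1.15m;    // hidden cost grows ~15%/year (assumption)

        decimal cumulative = 0m;
        for (int year = 1; year <= 5; year++)
        {
            cumulative += yearlyHiddenCost;
            Console.WriteLine(
                $"Year {year}: doing nothing has cost {cumulative:N0} vs one-off {modernizationCost:N0}");
            yearlyHiddenCost *= growthPerYear; // debt compounds, it doesn't stay flat
        }
    }
}
```

With these placeholder numbers, inaction overtakes the one-off cost during year two, which matches the pattern described above; the exact crossover depends entirely on your own figures.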

How to modernize legacy applications without stopping the business

Strangler Fig pattern for legacy software modernization

Now we get to the part you actually want: how it's done in practice.

The most common fear among IT managers is this: to modernize I have to stop everything, rewrite from scratch, and hope it works.

That fear is understandable. And partly justified, if you use the wrong approach.

There is, however, a radically different method, proven on hundreds of real projects worldwide: incremental legacy application modernization.

Why a full rewrite almost always fails

The full rewrite is the intuitive approach. The old system is terrible, let's throw it away and rewrite everything from scratch.

The idea seems logical. The data says otherwise: it fails in 60-70% of cases.

Not from lack of technical competence in the team. But for a precise reason: in the old system there are years of business rules that nobody has ever documented.

Those rules are in the code. Some are bugs that over time have become expected behaviors for departments. Some are non-standard exceptions handled in ways nobody has ever put in writing.

When you rewrite from scratch, you lose those rules. You don't even know you've lost them.

You find out when the new system is in production and something stops working. Or when the warehouse manager tells you "it used to be different" and you don't know why it was different.

You stop production for eighteen months. You rewrite everything. You discover that half the operational rules were hidden in the code. The line stops. The investment is lost.

I've seen this scenario three times in my career. Three companies that lost years of work and considerable budget. In one case, the company ended up reverting to the old system after spending six months on the rewrite.

The Strangler Fig pattern: the method that actually works

Martin Fowler described an approach called the Strangler Fig Pattern.

The name comes from a wild fig tree that grows around a host tree. The fig doesn't fell the tree. It grows around it, slowly, until it completely envelops it. Then the original tree dies naturally, but the fig's structure is already there, fully formed.

The principle applied to software: you don't replace the legacy system all at once.

You build the new system around the old one, module by module.

Production never stops. The old system keeps running while the new one progressively takes its place.

The old system and the new coexist for a period. Then, when you're certain the new module behaves exactly like the old one (including the edge cases), you decommission the old one.

Repeat for each module. Slowly, without ever stopping anything, the old system is enveloped and replaced.

The five operational steps to apply it

In practice, the method is applied as follows, one module at a time.

  1. Identify the first module to modernize: not the most critical, not the largest. The least critical and most isolated one. The success of the first module builds the confidence and the method for the ones that follow.
  2. Write behavioral verification tests: before touching a single line of new code, capture the exact behavior of the old module with automated tests. These tests are the equivalence contract the new module must honor.
  3. Build the new module on the new technology platform: you build it in parallel, without stopping the old one. The old module keeps running normally.
  4. Parallel operation and validation: the new module runs alongside the old one for a defined period. Results are compared. If there is a divergence, it is analyzed and corrected.
  5. Stop using the old module and decommission it: only when the behavior is certified as equivalent do you shift real traffic to the new one. Then the old module is decommissioned. Repeat with the next module in the list.
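The routing side of this cycle can be sketched as a small facade in front of both systems. The class, path and backend names below are illustrative, not from any real project: the point is only that the list of migrated prefixes grows one module at a time.

```csharp
// Minimal sketch of a Strangler Fig routing facade (all names are illustrative).
// Requests whose module has already been migrated go to the new system;
// everything else keeps hitting the legacy one. Growing the prefix list,
// module by module, progressively "strangles" the old system.
using System;
using System.Collections.Generic;
using System.Linq;

public class StranglerRouter
{
    // Path prefixes whose modules have been migrated and validated.
    private readonly HashSet<string> _migratedPrefixes;

    public StranglerRouter(IEnumerable<string> migratedPrefixes)
        => _migratedPrefixes = migratedPrefixes.ToHashSet(StringComparer.OrdinalIgnoreCase);

    // Decides which backend should serve a given request path.
    public string Route(string path) =>
        _migratedPrefixes.Any(p => path.StartsWith(p, StringComparison.OrdinalIgnoreCase))
            ? "new-system"
            : "legacy-system";
}

public static class Demo
{
    public static void Main()
    {
        // First iteration: only the reporting module has moved.
        var router = new StranglerRouter(new[] { "/reports" });

        Console.WriteLine(router.Route("/reports/monthly")); // new-system
        Console.WriteLine(router.Route("/orders/123"));      // legacy-system
    }
}
```

In a real deployment this decision usually lives in a reverse proxy or an API gateway rather than in application code, but the mechanism is the same.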

This approach requires more discipline than a full rewrite. It does not require more time: it requires a different kind of work.

Production never stops. Risk is distributed over time, not concentrated in a single catastrophic release.

Behavioral verification tests: the invisible pillar of modernization

This is the point that almost nobody truly understands until they've lived it firsthand.

Before building the new system, you need to understand exactly how the old one behaves.

Not how it should behave according to the 2009 documentation. How it actually behaves today, including the bugs that have over time become expected behaviors.

You do this by writing tests that capture the current behavior of the system. These are called behavioral verification tests.

Every time the old system processes an invoice, handles an order or calculates a bill of materials, the test records the input and the output.

These tests become the contract the new system must honor. If the new system behaves differently, the tests tell you before anyone in production finds out.
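A minimal illustration of the idea, with invented module names and values: the "golden" result is whatever the legacy code produces today, bug included, and the new code is held to it.

```csharp
// Illustrative characterization test (modules and figures are made up).
// The legacy module truncates VAT instead of rounding it; that behavior,
// correct or not, is the contract the new module must honor.
using System;

public static class LegacyInvoiceModule
{
    // Captured legacy behavior: truncation, not rounding.
    public static decimal VatAmount(decimal net, decimal rate)
        => Math.Truncate(net * rate * 100m) / 100m;
}

public static class NewInvoiceModule
{
    // The new implementation reproduces the recorded behavior exactly,
    // even where it differs from the "mathematically correct" result.
    public static decimal VatAmount(decimal net, decimal rate)
        => Math.Truncate(net * rate * 100m) / 100m;
}

public static class CharacterizationTest
{
    public static void Main()
    {
        // Inputs replayed from recorded legacy traffic (invented here).
        var cases = new (decimal net, decimal rate)[] { (10.05m, 0.22m), (99.99m, 0.22m) };

        foreach (var (net, rate) in cases)
        {
            decimal expected = LegacyInvoiceModule.VatAmount(net, rate); // golden value
            decimal actual   = NewInvoiceModule.VatAmount(net, rate);
            if (actual != expected)
                throw new Exception($"Divergence for net={net}: {actual} != {expected}");
        }
        Console.WriteLine("All characterization cases match.");
    }
}
```

Note that for net 99.99 the truncated VAT is 21.99 while "correct" rounding would give 22.00: exactly the kind of one-cent difference that a rewrite done from specifications would silently "fix" and a department would notice weeks later.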

I worked with a company where a bug in VAT rounding on invoices with variable-weight items had become, over years, the expected behavior in the sales department.

Customers expected that specific calculation. Sales factored it into quotes. Nobody had ever documented it because "that's how it's always worked."

The behavioral verification test captured it automatically. We replicated it in the new system, documented it and converted it into an explicit feature.

Without that test, we would have rewritten "correctly" and broken something critical for the business without knowing it.

How to choose the order of modules to modernize

The order of selection is not arbitrary. It follows precise criteria that balance risk and value.

There are two main criteria I use to build the priority list.

The first criterion is technical risk: which module generates the most bugs, requires the most urgent interventions, depends on the most obsolete technology with open vulnerabilities.

The second criterion is operational value: which module, once modernized, unlocks the integrations you've been waiting for, reduces the manual workload on departments or enables processes you can't do today.

The optimal order balances both: you start with something moderate in both dimensions, to build method and confidence, then move on to the more critical modules.

You never start with the most critical module for operations. Not on a first project. Not without having tested the method on something less risky.

Where to begin: the technical assessment and the first ninety days

Before writing a single line of new code, you need to understand exactly what you have.

This seems obvious. In reality, ninety percent of the failed modernization projects I've analyzed had skipped this phase.

They had gone straight to writing the new system based on assumptions. The assumptions were wrong. The project stalled within six months.

Why everyone skips the technical assessment and what happens

The pressure to skip the technical assessment always comes from above.

"We don't have time for analysis. We need to act." "We'll do the analysis as we build." "We already know the system well enough."

I've heard these phrases dozens of times. And in every case where the team followed them, the project ended up in difficulty.

What happens when you skip the technical assessment?

You start building the new system based on what you think you know about the old one. Halfway through the project you discover the old system has a calculation logic in a secondary module that you didn't include in the specifications.

That secondary module affects three other modules you've already built. You have to go back. Weeks of work to redo.

Three weeks of technical assessment at the start save months of rework mid-project.

What a technical assessment maps and what it produces

A technical assessment is not a meeting with sticky notes. It's structured technical work that produces a precise document.

The assessment answers these concrete questions.

  • How many modules does the system have and how do they depend on each other? Where are the natural boundaries between modules?
  • Which modules are critical for operations and which are secondary or replaceable?
  • Where do the most frequent bugs and most costly change requests concentrate?
  • Which business rules are in the code but not in the documentation? Where is the unwritten business logic hiding?
  • With which external systems does the software integrate and how? Which of these integrations are documented and which are informal workarounds?
  • What is the state of the code: cyclomatic complexity, circular dependencies, areas with zero tests?
  • Which technologies are unsupported and what known vulnerabilities do they carry?

The output of the assessment is a system map: modules, dependencies, risks and a prioritized list of interventions.

With that map, the modernization project starts on solid ground. Without it, it starts on assumptions.

The technical tools for a serious assessment

A professional technical assessment uses specific tools, not just interviews.

  • Static code analysis: tools like SonarQube or NDepend map cyclomatic complexity, class dependencies and modules with the most structural issues.
  • Dependency mapping: a visual graph of dependencies between modules and external libraries reveals hidden couplings that nobody had ever made explicit.
  • Integration inventory: a complete list of all external systems the software communicates with, including informal integrations via shared files or databases.
  • Structured knowledge transfer sessions: not free-form interviews. Sessions where the key person works on the code while an experienced technician notes the implicit decisions.
  • Load testing on critical modules: tests are run to understand where the system has performance issues under load, before migrating those modules.
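As a taste of what this tooling does, here is a tiny first-pass sketch, no substitute for SonarQube or NDepend: reflection lists the assemblies a .NET binary references, which is often enough to surface a forgotten dependency before the real analysis starts.

```csharp
// First-pass dependency inventory sketch. Point it at a legacy binary;
// here it inspects itself so the example is self-contained.
using System;
using System.Reflection;

public static class DependencyInventory
{
    public static void Main()
    {
        // In a real assessment: Assembly.LoadFrom(@"C:\legacy\Erp.Core.dll")
        Assembly assembly = Assembly.GetExecutingAssembly();

        foreach (AssemblyName reference in assembly.GetReferencedAssemblies())
            Console.WriteLine($"{reference.Name}, Version={reference.Version}");
    }
}
```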

This phase takes on average thirty days for a medium-sized system. It's not wasted time: it's time invested to avoid losing three times as much later.

The ninety days from assessment to the first module in production

The first concrete goal of any modernization project is to have something real in production within ninety days.

Not a demo. Not a prototype that "will work when it's complete." Something real that processes real transactions with real data.

  1. Days 1-30: complete technical assessment. Map of all modules, dependencies, integrations and critical points. Identification of the first module to modernize.
  2. Days 31-60: building the module and behavioral verification tests. The new module is built in parallel. Tests are written on both the old and the new for continuous comparison.
  3. Days 61-90: parallel operation and validation in a pre-production environment with real data. The new module runs alongside the old one. Results are compared. Divergences are corrected.
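The parallel operation in step 3 can be sketched as a shadow comparator: production keeps getting the legacy answer, the new module runs in the shadow, and every divergence is recorded for analysis. This is an illustrative skeleton, not a production implementation.

```csharp
// Shadow-run sketch for parallel validation (names are illustrative).
// The legacy result always wins; the candidate is only observed.
using System;
using System.Collections.Generic;

public class ShadowRunner<TIn, TOut>
{
    private readonly Func<TIn, TOut> _legacy;
    private readonly Func<TIn, TOut> _candidate;
    public List<string> Divergences { get; } = new();

    public ShadowRunner(Func<TIn, TOut> legacy, Func<TIn, TOut> candidate)
    {
        _legacy = legacy;
        _candidate = candidate;
    }

    public TOut Run(TIn input)
    {
        TOut legacyResult = _legacy(input); // what production still uses
        try
        {
            TOut candidateResult = _candidate(input);
            if (!EqualityComparer<TOut>.Default.Equals(legacyResult, candidateResult))
                Divergences.Add($"input={input}: legacy={legacyResult}, new={candidateResult}");
        }
        catch (Exception ex)
        {
            // A crash in the shadow must never affect production.
            Divergences.Add($"input={input}: candidate threw {ex.GetType().Name}");
        }
        return legacyResult; // production always gets the legacy answer
    }
}

public static class Demo
{
    public static void Main()
    {
        // Invented modules: the candidate diverges on one input.
        var runner = new ShadowRunner<int, int>(x => x * 2, x => x == 3 ? 7 : x * 2);
        runner.Run(2);
        runner.Run(3);
        Console.WriteLine($"Divergences recorded: {runner.Divergences.Count}"); // 1
    }
}
```

Only when the divergence list stays empty over a representative period, edge cases included, do you shift real traffic to the new module.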

At the end of ninety days you have three concrete things: the exact map of the system, a modernized module working in production, and technical certainty that the method works for your specific system.

It's the point where you stop acting out of fear and start acting by method.

Migration from .NET Framework to .NET 10: what really changes and how to prepare

Migration from .NET Framework to .NET 10 in Italian companies

Your system runs on .NET Framework. You might not even know which exact version.

That's the case for most Italian manufacturing SMEs. It's not a competence problem: it's a technological legacy accumulated over years of "it works, don't touch it."

The goal of .NET Framework to .NET 10 migration is not to chase the latest technology. It's to get out of the blind spot: the zone where security updates no longer arrive, modern integrations are impossible and new developers refuse to touch the code.

The .NET course covers the modern platform from the ground up and is useful for guiding the internal team through the migration.

Why migrate directly to the latest version and not stop halfway

Many IT managers ask: why not do an intermediate migration?

The answer is this: every migration has a cost. Paying it twice makes no sense.

If today you migrate to an intermediate version, in two years you find yourself again with a system on an unsupported version. The cycle restarts: tests to update, dependencies to verify, documentation to redo.

The right migration is the one that takes you where you want to be in five years, not two.

Once. Done properly. With the team trained to maintain the system without depending on anyone.

The technologies that require a true rewrite

Not everything migrates the same way. Some .NET Framework technologies have no direct equivalent in .NET 10 and require a genuine rewrite.

  • ASP.NET WebForms: has no equivalent in .NET 10. Must be rewritten in ASP.NET Core MVC or Blazor. The Strangler Fig pattern applied at the level of individual pages is the most effective method.
  • Windows Communication Foundation (WCF): does not exist in .NET 10. Replaced by CoreWCF for partial compatibility, by gRPC for internal services, or by REST APIs for external integrations.
  • Entity Framework 6: migrates to Entity Framework Core. The APIs are similar but not identical. Some LINQ queries must be rewritten because the SQL translations differ.
  • Web.config: replaced by appsettings.json with native Dependency Injection. Configuration for multiple environments (development, staging, production) becomes simpler and more readable.
  • COM Interop: the most complex case. COM components can be wrapped in a separate .NET Framework service that exposes a REST API. The rest of the application migrates to .NET 10 communicating with that transitional service.
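For the Web.config point above, here is a hedged sketch of the pattern that replaces it. The section and key names are invented; the `ConfigurationBuilder` calls come from the Microsoft.Extensions.Configuration packages, which ship with any ASP.NET Core host.

```csharp
// Sketch of the configuration pattern that replaces Web.config in modern .NET.
// Assumed appsettings.json (keys are illustrative):
// {
//   "ConnectionStrings": { "Erp": "Server=...;Database=Erp;..." },
//   "Shipping":          { "ExportFolder": "C:\\exports" }
// }
using Microsoft.Extensions.Configuration;

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true)
    // Per-environment file overrides the base one (Development, Staging, Production).
    .AddJsonFile("appsettings.Production.json", optional: true)
    // Environment variables override both: useful for secrets and containers.
    .AddEnvironmentVariables()
    .Build();

string? erpConnection = config.GetConnectionString("Erp");
string? exportFolder  = config["Shipping:ExportFolder"];
```

Compared with Web.config transforms, the layering is explicit and readable: base file, environment file, environment variables, in that order.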

For data access code, the Entity Framework course covers the migration from Entity Framework 6 to Entity Framework Core in a progressive way.

For web interfaces, the Blazor course is the most direct path for those who need to move away from ASP.NET WebForms.

The testing chapter in migration: where almost everyone gets it wrong

In a legacy system without automated tests, before migrating you must write behavioral verification tests.

These tests capture the current behavior of the system, including the bugs that over time have become expected features.

This is the phase that almost everyone underestimates or ignores entirely.

Those who ignore this phase end up with a migrated system that behaves slightly differently from the original. They find out when the production line is already running.

The behavioral difference is not always a technical bug. It is often an implicit business rule that the old system implemented in a particular way.

This phase requires on average forty percent of the total work in a migration project. Those who don't plan for it end up working overtime in the weeks after go-live.
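Behavioral verification (often called characterization or golden-master testing) is language-agnostic: you record the old system's outputs for known inputs, then diff the new system's outputs against those recordings. The sketch below, in Python purely for brevity, shows only the comparison step; `diff_payloads` is a hypothetical helper, and how you capture the payloads from your actual system is up to you.

```python
def diff_payloads(old, new, path=""):
    """Recursively compare two JSON-like payloads (dicts, lists, scalars)
    and return a list of human-readable differences. An empty list means
    the new system reproduced the old system's output exactly."""
    diffs = []
    if type(old) is not type(new):
        # A changed type (e.g. int vs float) is itself a behavioral difference.
        diffs.append(f"{path or '$'}: type {type(old).__name__} -> {type(new).__name__}")
    elif isinstance(old, dict):
        for key in sorted(set(old) | set(new)):
            if key not in new:
                diffs.append(f"{path}.{key}: missing in new system")
            elif key not in old:
                diffs.append(f"{path}.{key}: added in new system")
            else:
                diffs.extend(diff_payloads(old[key], new[key], f"{path}.{key}"))
    elif isinstance(old, list):
        if len(old) != len(new):
            diffs.append(f"{path or '$'}: length {len(old)} -> {len(new)}")
        for i, (a, b) in enumerate(zip(old, new)):
            diffs.extend(diff_payloads(a, b, f"{path}[{i}]"))
    elif old != new:
        diffs.append(f"{path or '$'}: {old!r} -> {new!r}")
    return diffs
```

During parallel validation you run this comparison for every recorded case; an empty diff list certifies equivalence for that case, and any non-empty result is either a migration defect or an implicit business rule that must be discussed before go-live.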

Realistic timelines for migration by system size

The estimates I give here are based on real projects. Not on theory.

System size          Lines of code       Estimated duration
Small application    Up to 50,000        3-6 months
Medium application   50,000 - 200,000    6-18 months
Large application    Over 200,000        18-36 months

These estimates include the technical assessment phase and the writing of behavioral verification tests. They are not "development only" estimates.

The right comparison is not between the cost of migration and zero cost. It is between the cost of migration and the cumulative cost of growing maintenance, security vulnerabilities, GDPR risk and key person dependency.

For an in-depth look at software architectures suited to modern enterprise systems after migration, I've described the most effective patterns in the manufacturing context.

It's also worth understanding why software should be treated as a company asset: the value is not only in the code, it's in the operational rules that code contains.

The mistakes that sink modernization projects: what nobody tells you beforehand

I've seen many modernization projects go wrong. Not because the companies were incompetent. Because they made predictable mistakes that the right experience allows you to avoid.

This section is the most useful in the entire article for anyone about to start a project.

Read it carefully. I've seen every mistake I'm about to describe made at least three times.

Mistake 1: starting without a technical assessment

"We already know the system." "We'll do the analysis as we build." "We can't afford six weeks of analysis before starting."

Every time I hear one of these phrases, I already know how the project will end.

The project starts fast. In the first three months it seems to be going well. In month five the first unmapped dependency emerges. In month six the rework begins. In month nine the project is stalled.

This is not a prediction. It's a pattern that repeats with mathematical precision.

The technical assessment is not a luxury. It's the investment with the highest return in the entire project.

Thirty days of analysis at the start saves six months of rework in the middle.

Mistake 2: migrating without behavioral verification tests

This mistake always manifests in the same way: the migrated system goes into production, everything seems to work, then after two weeks the reports start coming in.

"The bill of materials calculation is different from before." "The delivery notes have a slightly different total." "The monthly report doesn't match what we had before."

These are not technical bugs. They are implicit behaviors of the old system that the new system hasn't replicated because nobody documented them.

Without behavioral verification tests, you have no equivalence contract. You don't know if the new system behaves like the old one. You find out in production.

Finding out in production costs ten times more than finding out during the parallel validation phase.

Mistake 3: assigning the migration solely to the internal team without mentorship

The internal team knows the business. It knows the processes. It knows the exceptions that nobody has ever written down anywhere.

What it often doesn't know is how to manage a migration of this scope. Not from lack of technical competence. From lack of specific experience on this type of project.

Migrating a legacy system in production without stopping the business is a different problem from building a new system from scratch.

The pitfalls are different. The techniques are different. The risk points are different.

A team tackling this type of project for the first time pays a learning curve. That learning curve is measured in months and costly mistakes.

The solution is not to replace the internal team with external consultants. It's to support the internal team with someone who has already navigated those waters.

The mentorship transfers competence. At the end of the project the team knows how to manage things on its own. There are no dependencies on consultants.

Mistake 4: the rewrite disguised as incremental modernization

This is the most subtle mistake and the hardest to recognize from the inside.

The project starts with the right intentions: incremental modernization, one module at a time.

But then something happens. You start "optimizing" the first module. Then you decide to rewrite the data structure as well. Then you add a completely new interface.

After six months you find yourself rewriting everything from scratch, but with the name of the "priority module" still at the top of the project document.

The warning signal is this: if after three months of the project you still have nothing in production, you are not doing incremental modernization.

The incremental method produces results in production within ninety days. Always.

If that's not happening, stop. Revise the method. Don't wait another six months hoping the situation will change.

Mistake 5: not training the team on the new system during migration

A technology migration is also a skills migration.

If the team has worked for years on .NET Framework and WebForms, it cannot immediately switch to .NET 10, ASP.NET Core MVC and Entity Framework Core without a structured learning path.

The temptation is to do the migration quickly and train the team afterward. This is a mistake.

A team that doesn't understand the new system doesn't know how to maintain it after migration. It reverts to depending on external consultants. Or it introduces into the new system the same problems the old system had.

Training must happen in parallel with the migration, not after. The team must learn while building, with mentorship on real cases.

It's the only way to ensure that at the end of the project you have both a modern system and a team capable of maintaining it autonomously.

Why working with someone who's done this before reduces risk and accelerates results

Modernizing a legacy system is not a normal development project. It's surgery on a patient who cannot be anaesthetized.

The system must remain in production. The data must stay intact. The business rules must be replicated exactly, including those nobody has ever documented.

An internal team that has never tackled a migration of this scope pays a learning curve in delays, mistakes and, in the worst cases, corrupted data or production outages.

This is not a question of team competence. It's a question of specific experience on a type of problem that is rarely encountered.

The learning curve you pay in delays and mistakes

Every migration project has its specific pitfalls. But the common pitfalls are always the same.

Those who have already encountered them on other projects recognize them immediately and handle them before they become problems.

Those who encounter them for the first time recognize them only after they have already caused damage: weeks lost, modules to redo, data to reconcile.

I calculated, analyzing three migration projects done internally without mentorship, that the learning curve extended the projects by sixty percent versus the initial estimates.

They were not incompetent teams. They were competent teams facing for the first time a specific type of problem.

The difference is not competence: it's specific experience.

Who I am and what I've already done

I've been working on real .NET systems for twenty-five years, from the first versions of the framework through to .NET 10.

I founded and sold multiple software companies, including LegalDesk, sold to SEAC SpA after years of development on large-scale production systems.

I've worked with NATO, with FIAT and with GlaxoSmithKline. I know what it means to build and modernize systems that cannot afford to stop.

Today I work alongside Italian manufacturing companies with internal development teams. I don't replace the team: I guide it while it modernizes its own system.

In one project with a manufacturing company in Veneto we modernized a system of 180,000 lines of code in eighteen months without a single day of production downtime.

The secret was not the technology. It was the discipline in following the method and the experience to recognize problems before they manifested.

What an experienced consultant concretely brings to your project

The value I bring is not the new technology: it's not losing data, not stopping production and not wasting budget on a half-abandoned rewrite.

  • Complete technical assessment in thirty days instead of ninety: I know exactly where to look and what to look for. I don't start from scratch on every system.
  • Intervention plan based on real risk, not theoretical assumptions. The prioritized module list takes into account both technical risk and operational value for your specific business.
  • Internal team mentorship: I don't work in isolation. I work with your team, transferring the method. At the end of the project the team knows how to manage and maintain the new system without depending on consultants.
  • Early recognition of pitfalls: I've already seen the most common traps. I recognize them before they become problems. This saves weeks of rework.
  • Zero production downtime: this is not a marketing promise. It's a method with twenty-five years of field-tested proof and a track record I can document.

The right question to ask before deciding

The right question is not "how much does it cost to work with an experienced consultant?"

It's "how much does it cost to get the migration wrong on your own?"

A failed migration project doesn't just mean money lost. It means months of team work wasted. It means the old system still in production, with all its risks, after a year of work.

It means going back to square one with less budget and a demotivated team.

The cost of error on this type of project is high. The cost of doing things right from the start is far more contained.

Those who call once call again. Not because the contract binds them, but because the method works and the results are measurable.

Frequently asked questions

What is a legacy system?

A legacy system is an application that can no longer be modified with confidence, regardless of its age.

Typical characteristics include: no automated tests, outdated or missing documentation, dependence on unsupported technologies like .NET Framework 2.0 or Visual Basic 6, knowledge concentrated in a single person.

A 2010 system can be legacy. A 1990 system might not be. The determining factor is control, not the year it was built.

How do you recognize a legacy system?

The main signs are:

  • every change takes weeks instead of days
  • fixing one bug generates others in apparently unrelated parts
  • only one person truly understands the system
  • the software runs on unsupported technologies or hardware
  • integrations with external systems are impossible or require costly workarounds
  • the team is afraid to touch certain areas of the code

If more than three of these signs are present, the system is legacy.

What does legacy software modernization mean?

Legacy software modernization is the process of transforming an obsolete system into one that is maintainable, secure, and integrable.

It does not mean rewriting everything from scratch. It means adopting an incremental approach that replaces parts of the system module by module, while the company continues to operate.

The main strategies are the Strangler Fig method, progressive code restructuring, and layer-by-layer technology migration.

Is a complete rewrite better than incremental modernization?

A complete rewrite (big bang rewrite) means stopping everything, rewriting from scratch and releasing after 12-24 months. It fails in 60-70% of cases: business rules hidden in the code are lost or incorrectly replicated.

Incremental modernization replaces the system piece by piece, always keeping a working version in production. Risk is distributed over time, not concentrated in a single catastrophic release.

How long does a migration take?

Timelines vary based on system size: up to 50,000 lines require 3-6 months, between 50,000 and 200,000 lines require 6-18 months, over 200,000 lines require 18-36 months.

The right comparison is not with the zero cost of doing nothing. It is with the cumulative cost of growing maintenance, security vulnerabilities, GDPR risk, and dependence on a key person.

How does a migration to .NET work in practice?

Migration follows three phases: technical analysis (mapping modules and business rules hidden in the code), behavioral verification tests (tests capturing current behavior including undocumented behaviors), incremental migration by layer or module with no production downtime.

Which technologies require a true rewrite?

ASP.NET WebForms must be rewritten in ASP.NET Core MVC or Blazor. WCF is replaced by CoreWCF or gRPC. Entity Framework 6 migrates to EF Core with API changes.

What is the Strangler Fig pattern?

The Strangler Fig pattern, described by Martin Fowler, is an incremental approach inspired by a wild fig tree that grows around a host tree until it completely replaces it.

In practice: identify the least critical module, build the modern version in parallel, run it alongside the old one for a validation period, shift real traffic only when behavior is certified equivalent, decommission the old one and repeat.

The old system remains in production throughout the transition.
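In .NET terms, the traffic shift in the steps above is often handled by a reverse proxy in front of both systems. As an illustrative sketch (assuming YARP, Microsoft's reverse proxy for .NET; the route, cluster, and address names here are hypothetical), a single configuration entry can send one migrated module to the new service while everything else still reaches the legacy application:

```json
{
  "ReverseProxy": {
    "Routes": {
      "ordersRoute": {
        "ClusterId": "newOrders",
        "Match": { "Path": "/orders/{**catch-all}" }
      },
      "legacyRoute": {
        "ClusterId": "legacyErp",
        "Match": { "Path": "{**catch-all}" }
      }
    },
    "Clusters": {
      "newOrders": {
        "Destinations": { "primary": { "Address": "http://new-orders:5000/" } }
      },
      "legacyErp": {
        "Destinations": { "primary": { "Address": "http://legacy-erp:8080/" } }
      }
    }
  }
}
```

The more specific `/orders` route wins over the catch-all, so rolling a module forward or back becomes a configuration change rather than a redeploy.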

Are legacy systems a GDPR risk?

Legacy systems are almost always non-compliant with GDPR because they were built before 2018.

Typical risks include: inability to export or delete a specific data subject's data (right to erasure), absence of personal data access logs, digitally untracked processing consent, inability to demonstrate the legal basis for processing.

The Data Protection Authority has already issued fines to Italian SMEs for technical inadequacy. Modernization resolves these issues structurally, not with temporary fixes.


Matteo Migliore


Throughout his career, he has worked with organizations such as Cotonella, Il Sole 24 Ore, FIAT and NATO, leading teams in developing scalable platforms and modernizing complex legacy ecosystems.

He has trained hundreds of developers and supported companies of all sizes in turning software into a competitive advantage, reducing technical debt and achieving measurable business results.

You're reading this because you want to stop patching fragile software. Discover the method for designing systems that stand the test of time.