
Let's be clear right away.
Maybe you've heard this term, UI composition, thrown around by some developer in a meeting, or read it in a blog full of big words.
They may have led you to believe it's the final frontier of software development, a kind of magic formula for building desktop graphical interfaces (think of your Windows apps) in a super modern and flexible way.
The idea itself is not complicated: instead of building your window "in one piece", you assemble it by putting together many small independent bricks, called "components" or "modules".
The idea is that these bricks are reusable, individually testable, easy to upgrade. A bit like Lego, right?
There are also software tools, specific libraries for developers (someone may have mentioned high-sounding names), which were created precisely to facilitate this approach.
They promise wonders: flexible, scalable, future-proof software... all nice words that fill the slides.
But the real question is: is it really for you? Does it serve your specific project, with your resources, your team, your budget? Does it solve a problem you actually have?
In my experience, accumulated over many years spent seeing software projects being born, growing and unfortunately sometimes crashing precisely because of inadequate technical choices, the answer is almost always a firm NO.
I'm talking about the reality I know best: Italian software houses and, above all, manufacturing companies that develop software in-house. Small or medium development teams (1 to 5 people, sometimes a few more) working hard on management software, office applications, and specific utilities.
In this context, in 99% of cases, UI composition as it is often proposed is totally useless.
Not just useless, but actively harmful.
Why? Because it introduces layer upon layer of technical complexity where there was no need for it.
It complicates the code, lengthens the time needed to develop any functionality, and therefore inflates costs unjustifiably.
All this, without bringing a tangible benefit either to you, who pays, or to the end user, who uses the application.
It's a textbook example of over-engineering: using a nuclear weapon to crack a nut.
It is done because "it is modern", because it lets some technician feel at the cutting edge or add an acronym to a CV, or perhaps because an external consultant has to justify his (expensive) presence by proposing hyper-sophisticated solutions.
But almost never because it solves a real project problem efficiently and economically.
In our sector there is a continuous push towards novelty, towards the framework of the moment, towards the architectural pattern that happens to be fashionable on Twitter.
But real skill isn't in uncritically adopting the latest trend.
It lies in deeply understanding the problem you need to solve and choosing the simplest and most effective tool to do it, taking into account the concrete constraints: the time available, the people who will work on it, and above all the budget.
And it is essential that you, as a client, manager or project manager, are armed with this critical spirit so as not to be enchanted by technological promises that could turn out to be a costly boomerang.
The (false) promise: reusable components, modularity, maintainability

Those who propose UI composition as the main path will paint an idyllic picture, based on three great promises which, taken individually, all seem reasonable.
The problem is that, in the context of standard desktop applications developed by small teams, these promises rarely materialize as hoped, and often hide unwanted side effects.
Let's look at them one by one with a critical eye.
The siren of reuse: miracle components?
The first promise is that of reusable components.
"Think how nice it is," they'll tell you, "we create the grid to show the data just once, with all the sorting, filtering and pagination functions, and then we use it in all the windows: customers, suppliers, items, orders... a huge saving!".
In theory, great.
In practice, to make that grid work exactly as needed in such different contexts, you have to fill it with options, parameters, custom events, conditional logic.
It must accommodate different data types, varying numbers of columns, specific behaviors required in one window but not in another.
The result?
That "reusable" component becomes a monstrously complex piece of software, difficult to develop, to test in all its variations and, above all, to maintain.
Often, you end up spending more time creating and debugging the "generic" component than you would have spent creating 3 or 4 specific, simple and clear grids in the places where they were needed.
Reuse pushed to the extreme, especially in the user interface which always has many small exceptions, is often a complexity trap.
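As a deliberately exaggerated illustration of this trap (all names are hypothetical, and it is sketched in Python for brevity rather than the C# these apps would actually use): compare the signature an "all-purpose" grid tends to grow with the boring, specific alternative.

```python
# The signature an "all-purpose" grid tends to grow (names hypothetical):
class GenericGrid:
    def __init__(self, columns, sortable=True, filterable=True,
                 paginated=False, editable=False,
                 hide_empty_columns=False, row_color_rule=None,
                 on_select=None, on_double_click=None):
        # Every new window that "reuses" the grid adds a flag or callback,
        # and every flag multiplies the paths to test and debug.
        self.columns = columns
        self.sortable, self.filterable = sortable, filterable
        self.paginated, self.editable = paginated, editable

# The boring alternative: one small, specific function per window.
def customer_rows(customers):
    # Exactly the two columns the customers window needs, nothing more.
    return [{"code": c["code"], "name": c["name"]} for c in customers]

print(customer_rows([{"code": "C001", "name": "Rossi SRL", "vat": "IT01"}]))
# → [{'code': 'C001', 'name': 'Rossi SRL'}]
```

The specific function is trivial to read, test and throw away; the generic class has to be understood in full before anyone dares touch it.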
The myth of independent modularity
The second promise is that of modularity.
“Let's divide the application into modules!”, they will suggest.
One module for master data, one for invoicing, one for the warehouse. That way they are independent; each can be developed and tested on its own, perhaps by different people!
Again, this sounds great, especially if you're thinking about huge projects.
But let's put it into your reality: a team of 3-5 people working on the same management software.
Physically dividing code into separate modules (often even different projects within Visual Studio) introduces a lot of practical complications.
You have to manage the dependencies between these modules (because, let's face it, they will never be truly independent: invoicing will need the master data!).
You have to set up complex mechanisms to make them communicate with each other.
You have to manage the process of compiling and installing all these separate pieces.
For a small team, this physical "modularity" is almost always an unnecessary burden.
Good logical organization within a single project is much more efficient.
Don't confuse physical separation of files with logical separation of responsibilities.
The illusion of simplified maintenance
The third promise follows from the first two: with small components and independent modules, they say, any fix or improvement stays confined to the piece involved.
True, the change is localized.
But the impact of the change?
You have to be very sure that changing that piece doesn't break anything in all the other modules that perhaps used it or received data from it.
You need to test all interactions.
And above all, you have to maintain not only the "functional" code, but also all the "infrastructure" you have built to make this complex architecture work: the systems to manage dependencies, the message buses, the abstract interfaces...
Often, maintenance shifts from modifying the business logic (which may have been simple) to debugging the abstruse mechanisms of the architecture itself.
These three promises, although valid in certain extreme contexts, for the majority of desktop applications become a mirage that leads you astray, causing you to accumulate additional costs and technical complexity without a commensurate return in terms of quality or efficiency.
The reality in real projects (the Italian ones): what happens when you fall for it

Let's put theory and glossy slides aside.
I want to tell you what I see happening, almost like a script that repeats itself, when a "normal" development team, perhaps under the pressure of good intentions but little practical experience on a large scale, decides to embrace the philosophy of UI composition pushed for a standard desktop project.
Imagine the scene: they have to develop management software for a manufacturing company.
Classic features: customers, suppliers, warehouse, orders, production, invoicing.
A medium-sized project, but fundamental for the customer.
It is decided to use a "modern" architecture based on composition and separate modules.
The first months are dedicated to "architecture" (with a capital A).
The team doesn't start by creating the customer window.
No, it spends weeks on:
- Installing and configuring composition-specific software libraries (the "framework").
- Deciding how many modules to create (Customers? Customers and suppliers together? Is shipping a separate module?). Hours of academic discussions.
- Setting up the system for dependency management (a complex way to manage how the pieces "talk to each other").
- Defining "zones" in the main interface (the areas where modules will be dynamically loaded).
- Establishing the messaging system through which the modules communicate.
All this produces a lot of "infrastructural" code, difficult to understand for those who have not written it, but no functionality visible to the customer. Time passes, the budget starts to shrink, and the application is still a ghost. This is the huge hidden cost of initial setup, an "investment" in complexity that rarely pays off on this scale.
Then you start to develop, but it's an obstacle course.
Finally the first windows are created.
But each operation is slower and more cumbersome than normal.
To show a list of customers, it's not enough to just load it and display it.
You need to create a "component" for the list, a "component" for the filter bar, a "component" for the buttons, each with its own separate piece of logic.
Then you have to make them communicate: the filter must tell the list to update, the list must tell the buttons if a customer is selected...
The code is filled with events, messages, interfaces.
You or your developers, instead of focusing on the business logic (e.g. how to calculate the customer discount), spend half the time making the architecture itself work, "passing the ball" between the various components correctly.
Productivity collapses.
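To make that "passing the ball" concrete, here is a minimal sketch, with hypothetical names, written in Python for brevity (the stacks discussed here are C#/WPF or MAUI, where the shape is the same): three tiny pieces and a message bus, just to show a filtered list.

```python
# A minimal message bus: publish/subscribe by topic name.
class Bus:
    def __init__(self):
        self.handlers = {}
    def subscribe(self, topic, fn):
        self.handlers.setdefault(topic, []).append(fn)
    def publish(self, topic, payload):
        for fn in self.handlers.get(topic, []):
            fn(payload)

# The filter bar doesn't know the list exists; it only sends a message.
class FilterBar:
    def __init__(self, bus):
        self.bus = bus
    def apply(self, text):
        self.bus.publish("filter_changed", text)

# The list reacts to the message, then notifies other components in turn.
class CustomerList:
    def __init__(self, bus, customers):
        self.bus = bus
        self.customers = customers
        self.visible = list(customers)
        bus.subscribe("filter_changed", self.on_filter)
    def on_filter(self, text):
        self.visible = [c for c in self.customers if text in c]
        self.bus.publish("selection_cleared", None)  # buttons listen here

bus = Bus()
lst = CustomerList(bus, ["Rossi", "Bianchi", "Verdi"])
FilterBar(bus).apply("Ross")
print(lst.visible)  # → ['Rossi']
```

The direct alternative is a single textbox handler that filters the list in place: no bus, no subscriptions, nothing indirect to trace when the filter stops working.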
Debugging becomes a recurring nightmare
Inevitably, something doesn't work. A calculation is wrong, a piece of data doesn't update, the application slows down.
Finding the cause is like looking for a needle in a haystack spread across ten different rooms.
Is the problem in component A?
Or in component B that sends the data to it?
Or in the C service that loaded them?
Or in message D that didn't go?
You have to use advanced debugging tools, put breakpoints everywhere, mentally reconstruct the intricate flow through all levels of abstraction.
Precious (and expensive) hours are burned solving problems that would take minutes in a simpler system.
Eventually, you notice the "false modularity"
After months of work, you realize that "independent" modules are not "independent" at all.
The billing module is unusable without the customers and orders modules.
Changing one small thing in the warehouse module forces you to also update three other modules.
The physical separation into different projects was only a complication: the modules are strongly coupled logically.
You only got the disadvantages of distribution (complexity of build, deployment, communication) without the advantages of independence.
The end result?
An application delivered late, costing much more than expected, more difficult for the developers themselves to use (and therefore to maintain), and often less stable.
A victory for technology as an end in itself, a defeat for pragmatism and for your budget or that of your client.
Why you often complicate the interface without really simplifying the code
I want to delve deeper into a point that in my opinion is central: the illusion that complicating the technical structure of the code automatically leads to a benefit, perhaps making it "simpler" to manage complex interfaces.
It's a misconception, based on a misunderstanding of what true simplicity in software development is.
Think about what a desktop application you use looks like.
It has windows, right? Inside the windows there are buttons, text fields, tables, graphs.
The user interacts with these elements.
The "complexity" for the user lies perhaps in the quantity of functions or in the clarity with which they are presented, certainly not in how the developer has decided to organize the code "behind the scenes".
When you adopt a very advanced UI composition, you are in fact imposing a complex technical structure (many small pieces, indirect communication, abstractions) to represent something that, from a logical and visual point of view, could be much simpler.
You're adding intermediate layers between the intention ("I want to show customer data") and the result ("I see customer data on the screen").
Each additional layer of abstraction, each separate "component", each indirect communication mechanism, introduces different concrete problems.
It increases the amount of code to be written, not only the useful one but also the "glue" one.
It makes the code harder to read and understand, because you have to follow the flow through multiple files and layers.
It multiplies the points where bugs can arise, because every interface between components is a potential source of error.
It makes debugging a longer and more complex operation.
Finally, it introduces rigidity: changing something may require cascading changes on multiple levels.
All this to get, in the end, the same window with the same buttons and tables that you would have obtained with a more direct approach?
Where is the advantage?
There is not, except in very particular cases (which we will see).
In most projects, this technical complexity is just ballast.
It slows down development, increases costs, makes software more fragile.
True elegance in development lies not in creating bizarre architectures, but in finding the simplest possible solution that solves the problem effectively and robustly.
The pushed UI composition, in 99% of cases, goes in the opposite direction.
You're paying a high price for complexity you don't need.
Modularizing too often just means hiding confusion

Modularity, the idea of dividing a complex problem into smaller, more manageable parts, is a good idea in general.
But the devil, as they say, is in the details.
And in the way it is applied.
The modular approach driven by UI composition often leads to excessive, counterproductive fragmentation. You create dozens of modules, perhaps each in its own separate project, thinking you've achieved perfect order.
But then you realize that these modules are not independent at all.
You discover that module A needs a piece of data that is in module B, which in turn needs to trigger an action in module C.
How do they talk to each other?
You have to introduce complex communication mechanisms: shared interfaces, bridging services, centralized messaging systems...
All additional infrastructure you need to design, implement and maintain.
In the end, independence is just an illusion.
You've replaced direct (and easy-to-see) dependencies within a single project with indirect (and hard-to-trace) dependencies between separate modules.
It's like trying to bring order to a chaotic desk by moving each object to a different box: you have lots of boxes, but the desk is still unusable because to do anything you have to open ten different boxes.
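A minimal sketch of that swap, with hypothetical names (Python for brevity; the same shape applies in C#): the dependency does not disappear, it just becomes invisible to the reader and to the tooling.

```python
# Direct (single project): the dependency is explicit and easy to follow.
def make_invoice_direct(customer_repo, customer_id):
    return {"to": customer_repo[customer_id], "total": 0}

# Indirect ("independent" modules): the same dependency, hidden behind a
# string key in a shared registry. Still coupled, just invisibly.
registry = {}

def make_invoice_indirect(customer_id):
    repo = registry["customers_module"]  # fails at runtime if missing
    return {"to": repo[customer_id], "total": 0}

repo = {1: "Rossi SRL"}
registry["customers_module"] = repo
print(make_invoice_direct(repo, 1))  # → {'to': 'Rossi SRL', 'total': 0}
print(make_invoice_indirect(1))      # → {'to': 'Rossi SRL', 'total': 0}
```

Both calls produce the same invoice; only the first one lets a reader (or a compiler, in a statically typed language) see at a glance where the customer data comes from.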
This fake modularity is dangerous.
It increases the complexity of the software compilation process.
It complicates the installation, because you have to make sure you distribute all the right pieces.
It makes making structural changes to the code (refactoring) riskier, because it is not immediately obvious which other modules might be impacted.
And worst of all, it masks real design problems: instead of simplifying messy logic, you break it up and hide it across different modules, making it even harder to fix later.
For the vast majority of desktop applications, good logical organization within a single project is the key to true maintainability and clarity.
Don't be fooled by the siren of physical modularity at all costs.
The cost of over-engineering: time, bugs and effort (paid by you)

I want to be brutally honest on this point, speaking directly to you who perhaps manage a company or a team and have to make ends meet. Over-engineering, such as what derives from the dogmatic application of UI composition where it is not needed, is not a technical detail for professionals. It's a business problem: a heavy cost that you end up paying, in different but equally harmful ways.
The most immediate and easy to understand cost is the longer development time.
If to create a feature that, with a simple approach, would require 5 days of work, the team takes 15 because it has to follow the dictates of a hyper-complex architecture, those 10 days of difference are an out-of-pocket cost for you.
These are development hours you pay to produce technical complexity, not direct functional value for your users or your business.
Multiply this by the entire application and you understand how budgets can balloon for no real reason.
It's like paying an architect to design a doghouse with the same techniques and materials as a skyscraper: technically impressive, but an absurd waste of resources.
Then there is the cost of lower quality and additional bugs.
It is not true that more complex software is automatically better.
In fact, it's almost always the opposite.
The more code there is, the more layers of abstraction there are, the more intricate interactions between the moving parts, the easier it is for something to break, especially in subtle and hard-to-predict ways.
Bugs in over-engineered systems are often harder to diagnose and fix.
This means more time spent debugging (other costs), less reliable releases, and more inconvenience for those using the software (which could be you or your customers).
Simpler software is almost always more robust and reliable software.
Don't forget the hidden but very heavy cost of future maintenance.
An application built on an unnecessarily complex foundation becomes a burden over the years.
Every time you have to add a new feature, adapt the software to new regulations, or simply correct a behavior, the intervention is longer, riskier and more expensive.
If the developer or small team that had that specific architecture in mind leaves, you could find yourself in serious trouble, with a system that no one fully understands anymore and that is difficult to intervene on.
It's the classic "technical debt": you saved yourself (maybe!) some critical thinking at the beginning by choosing the fashionable solution, and now you pay interest for years in the form of slow, expensive maintenance.
Finally, consider the human cost: the team frustration.
Good developers love solving complex problems, but hate struggling with artificial and unnecessary complexity.
Working on a cumbersome system, where even simple changes require convoluted detours, is demotivating.
It leads to lower productivity, stress, and can increase the risk that the best people will seek more peaceful (and productive) shores.
And finding substitutes capable of navigating that specific complexity will be more difficult and expensive.
So, the next time they propose a technical solution that sounds very complex and "advanced", ask the fundamental question:
Ok, nice, but what are the concrete benefits for us, for our budget, for our ability to maintain and evolve this software in the future?
Is this complexity justified by the problem we need to solve?
If the answers are vague or too academic, raise your antennae.
You may be about to pay the price for over-engineering.
What really works in WPF and MAUI desktop projects in Italy

After having critically analyzed what doesn't work in most cases, it is right to focus on what does work.
What is the approach that, in my experience, allows small and medium teams to build solid, robust, maintainable desktop applications, delivered on time, without exploding costs?
It's not a magic formula or the latest shiny technology.
It's a return to fundamentals, to pragmatic software engineering, focused on simplicity and clarity.
Here are the ingredients that I see give the best results, the ones that I always recommend to my clients:
Clear logical organization
The fundamental point is to clearly separate the visual aspect of the interface — typically written in XAML — from its behavioral logic (what happens when you click a button, what data to show).
There are various ways to do this, but the important thing is the principle: keep the graphics code separate from the underlying logic and business data.
This separation makes everything more understandable and manageable, without the need for unnecessary complications.
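In C#/XAML this is the familiar split between the view and its view model. Here is the bare principle, sketched in Python with hypothetical names: the behavioral class knows nothing about graphics, so it can be exercised with a fake service and no window at all.

```python
# Behavioral logic only: no graphics code, no XAML, testable on its own.
class CustomerViewModel:
    def __init__(self, service):
        self.service = service
        self.customers = []

    def load(self):
        # The window merely binds to self.customers after this runs.
        self.customers = self.service.all_customers()

# A fake service is enough to exercise the logic without any UI.
class FakeCustomerService:
    def all_customers(self):
        return ["Rossi SRL", "Bianchi SNC"]

vm = CustomerViewModel(FakeCustomerService())
vm.load()
print(vm.customers)  # → ['Rossi SRL', 'Bianchi SNC']
```

The XAML window only displays what the view model exposes; none of the logic above needs to move or change if the layout of the window does.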
A project, a lot of internal organization
For the vast majority of applications, there's no need to split your code into dozens of separate projects.
A single main project, but impeccably organized within it with specific folders (one for windows, one for associated logic, one for data models, a crucial one for business "Services", one for shared utilities...), is much easier to manage, compile and deploy.
Order is created with internal discipline, not with technical fragmentation as an end in itself.
Strict separation of responsibilities
This is an indispensable pillar.
The logic that manages a window should NEVER contain code that talks directly to the database, or that implements complex business rules (such as calculating a quote).
To simplify, these important logics must be in dedicated classes, which are totally independent of the graphical interface.
Window logic uses these services to get data or perform operations.
So, if a business rule changes, you edit it in one place, and you can test that rule regardless of the interface.
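A sketch of the idea (hypothetical rule and names, Python for brevity): the rule lives in one service class, with no interface code anywhere near it, so it can be checked directly.

```python
# The business rule lives here, once, with no UI code in sight.
class PricingService:
    def customer_discount(self, yearly_volume):
        # Hypothetical rule: 10% over 50k of yearly volume, 5% over 10k.
        if yearly_volume > 50_000:
            return 0.10
        if yearly_volume > 10_000:
            return 0.05
        return 0.0

svc = PricingService()
print(svc.customer_discount(60_000))  # → 0.1
print(svc.customer_discount(20_000))  # → 0.05
```

Any window that needs the discount asks the service; when the rule changes, you edit and re-test this one class, and every window picks up the new behavior.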
Custom user controls? Judiciously!
Technologies for creating desktop interfaces allow you to create reusable "pieces".
But don't abuse it.
It makes sense to create a custom control only if you have a really complex piece of interface (not a simple text field, but maybe a graphical editor, an interactive map viewer...) that you need to reuse identical in many different windows and which has its own complex internal logic.
For everything else, use the standard controls that the technology offers you.
The less custom code you write, the less you have to maintain and the less you risk introducing bugs.
External libraries: true intelligent reuse
If you need a data grid with a million features, a professional charting system, a way to generate complex reports or export to PDF/Excel, DON'T start building it from scratch!
It would be an expensive folly and you would probably never get to the quality of the specialized components available on the market.
There are great component libraries (some paid, some free) that give you these features out of the box, tested and optimized.
Choosing the right libraries and integrating them well into your project is the smartest and most cost-effective way to get advanced functionality without reinventing the wheel.
This is reuse that brings real value to your project and your budget.
This approach, based on pragmatism and simplicity, doesn't scream the latest technological fashion, but I assure you it is what brings home the result: software that works, delivered on time and on budget, and which can be maintained and evolved over the years without becoming a nightmare.
It is the approach that respects your investment and the professionalism of those who work there.
When (and only when) does it make sense to use UI composition in WPF/MAUI

Allow me to repeat it, because it is really the heart of the matter: in 99% of the desktop application projects you are probably managing or developing (the production management software, the app for the technical office, the machine-control software), extreme UI composition, the kind that requires complex frameworks and separate-module architectures, is the wrong choice.
It's like using a chainsaw to peel an apple: a waste of energy and an unnecessary risk, which only causes damage.
However, there is that 1% of cases, very particular situations, in which the chainsaw is really useful.
It is important to know how to recognize them so as not to apply the same logic to completely different contexts.
It is not a question of nuances, but of abysmal differences in the scale, objectives and organization of work.
Let's look at these exceptional scenarios in detail:
GIGANTIC global enterprise applications
Here I am not talking about the management system, however complex, of the typical Italian company.
I'm talking about software that is the operational heart of multinationals, global investment banks, international airlines.
Systems with hundreds or thousands of screens, interconnected business logic on a planetary scale, extreme security and reliability requirements.
To develop and maintain monsters like this, you need huge teams (hundreds of developers) spread across the world.
In this context, a very thorough physical decomposition into independent modules, managed with composition frameworks and rigorous processes, becomes a vital necessity in order to manage complexity, coordinate the work of many teams, isolate changes and guarantee the stability of the system.
Modularity here is not a technical quirk, but a forced response to an unusual operational and organizational scale.
Platforms designed to be extended by third-party plugins
This is a specific business model.
You are creating software that is not "finished", but is an open platform, designed specifically for other companies or developers to create and add their own features (the "plugins" or "extensions").
Think of graphics software to which you can add new filters or brushes, or a development environment to which you can add support for new languages.
To allow this, the architecture must be "open" and provide standard mechanisms for loading external code (often distributed as separate files) and integrating it at both the logical and user interface levels.
Here, composition and dynamic loading techniques are essential due to the very nature of the product.
But if you are building a "closed" application for a specific customer or for internal use, this need does not exist.
Software sold as a standard product but with extreme customization needs
Imagine having a software product that you sell to dozens of different customers on a white label or multi-tenant basis, but each customer doesn't just want their logo, they want an almost unique experience.
They want to be able to activate or deactivate entire functional modules (e.g. e-commerce yes, warehouse management no), they want to add customized data fields that automatically appear in forms and reports, they perhaps want a completely different operational flow for some key procedures.
To handle such high variability without having to maintain dozens of different versions of the software, you may need a very flexible architecture, where the interface itself is "assembled" on the fly based on the customer (tenant) configuration, loading the right pieces and adapting the layout.
It's a significant technical challenge, justified only by a business model that requires this extreme level of customization at scale.
Normal customizations are handled in much simpler ways.
Management of large (>30/40 people) and geographically distributed development teams
Sometimes, the choice of a separate-module architecture is dictated more by organizational needs than by the intrinsic technical nature of the application.
If you have a very large development team, perhaps spread across different locations or working with methodologies that require strong parallelization, dividing the application into modules may seem like a way to reduce interference between sub-teams and facilitate independent work.
With my long experience I suggest caution: if the modules are not logically well separated and independent, this physical division can create more communication and integration problems than it solves.
It is often better to invest in good source code management practices and team communication rather than an artificially complex UI architecture.
It is a solution to be considered only in cases of really large teams and only if accompanied by adequate organizational processes.
As you can easily understand, these are situations that are very far from the daily life of most Italian companies and software houses.
If your project clearly doesn't fall into one of these borderline scenarios, then the most logical, wise, and cost-effective conclusion is to stay away from heavyweight UI composition.
It is not the solution to your problems, it would only risk creating new and more expensive ones.
Practical example: how to make a desktop UI simple without getting stuck in unnecessary patterns

I want to make the difference between the two worlds even more concrete.
Think again about having to create the window to modify a customer's data.
I won't show you lines of code, but I will describe what choosing one path or the other means for you, your team and your wallet.
If you take the path of over-engineering (pushed UI composition):
- The start of the project will be slow and expensive, because much of the initial time goes into setting up the complex architecture, choosing and configuring specific libraries, and defining modules and abstract interfaces, before producing any visible functionality.
- Developing each individual window will take longer than necessary, because you will have to break the logic and interface into many small separate components and then write extra "glue" code to make them work together.
- The source code will become much more bulky, fragmented, and difficult to understand as a whole, requiring more time to debug and to onboard new people.
- The probability of introducing subtle bugs, related to complex interactions between components or the architecture's lifecycle, will be higher.
- Future maintenance (adding fields, modifying logic, correcting errors) will be slower, riskier, and therefore more expensive. It will require staff with specific expertise in that architecture.
- The total cost to develop and maintain the application will be significantly higher.
If you instead choose the path of pragmatism (simplicity and good organization):
- The start of the project will be rapid. You can start developing the features requested by the customer almost immediately, focusing on business value.
- The development of each window is faster and more direct. The logic is concentrated and easier to implement and test.
- The source code will be more compact, easier to read, understand and navigate, even for those who didn't write it.
- Debugging will be faster and more effective, because the potential points of failure are fewer and easier to isolate.
- Future maintenance will be more agile and economical. Changes will take less time and can be done by developers with standard skills.
- The total cost to develop and maintain the application will be significantly lower.
Let's summarize to compare when the two approaches make sense.
| Scenario | When UI composition (maybe) makes sense |
|---|---|
| Global enterprise applications with extreme requirements | Mammoth projects with hundreds of screens and teams distributed around the world. Here heavyweight UI composition really does help coordinate the work, preserve scalability and manage complexity. |
| Extensible platforms with third-party plugins | If you are building software designed to be enriched by external extensions, UI composition becomes essential to support a modular, dynamically loadable architecture. |
| Multi-tenant products with extreme customization | When you sell the same software to dozens of different customers, each with very different functional and visual needs, you need a flexible system to compose a tailor-made interface at runtime. |
| Large, geographically distributed teams | If your team exceeds 30-40 people working in different locations, you can consider splitting into modules to reduce interference. But beware: only if the modules are truly independent and internal processes are mature. |
The point is this: for the same functional result for the end user, one approach costs you much more than the other in terms of time, money, risks and stress.
The “technologically advanced” choice is not always the best choice for business.
Indeed, it almost never is when we talk about applying complex architectures to problems that do not require them.
As a project or company manager, your goal should be to get the most value with the least waste of resources.
Choosing simplicity, clean organization and the right tools for the specific job is the main way to achieve this goal.
Don't be seduced by complexity for its own sake.
