MCP and AI Agents in .NET for Manufacturing 2026
Matteo Migliore

Matteo Migliore is an entrepreneur and software architect with over 25 years of experience developing .NET-based solutions and evolving enterprise-grade application architectures.

He has led enterprise projects, trained hundreds of developers, and helped companies of all sizes simplify complexity by turning software into profit for their business.

Everyone is talking about AI in manufacturing. At industry conferences, in trade publications, in press releases from automation vendors. AI will transform manufacturing, AI will revolutionize production, AI will change everything.

But there is an enormous difference between "putting a chatbot on the website" and building AI agents that actually communicate with your production systems.

Imagine a plant manager at 6 in the morning, before even entering the factory, asking their phone: "How did the night go?" And receiving a detailed answer: "Line 2 had a 23-minute stoppage at 2:14 AM due to a temperature alarm on the treatment furnace. Batch 5821 was completed at 97.3% efficiency. There are 3 late orders that need your attention."

This is not science fiction. It is what MCP makes possible today, connecting language models to real company systems. The technology exists. The .NET libraries exist. The standard is open and stable.

The problem is that most manufacturing companies do not know this yet. And most .NET developers have not yet realized that this is the most concrete professional opportunity in the industrial sector in recent years.

Until now, AI in manufacturing has meant two things: computer vision systems for quality control (expensive, vertical, difficult to generalize) and company chatbots that know the manuals but nothing about what happened in your production yesterday.

MCP changes the rules of the game. It is not another AI technology to learn from scratch. It is an open, standard protocol that defines how any language model can connect to any company system in a uniform way. A "USB-C for artificial intelligence," according to those who designed it.

Manufacturing companies that implement this architecture in the next 18 months will build a lead that competitors will not find easy to close. Others will follow. The only question is when.

In this article I will show you how the MCP plus AI agents architecture actually works in industrial settings, what can be built today with .NET, what the concrete risks are that should not be underestimated, and how to start a pilot project that delivers real value in 90 days.

If you develop in .NET and have even a passing interest in the industrial world, this is the moment to read carefully.

Why generic AI does not work in the factory: the closed data problem

ChatGPT knows the world. It understands OEE, knows best practices for predictive maintenance, and can explain the difference between a MES and an ERP system with encyclopedic precision.

But it knows nothing about your factory.

It does not know that line 3 had a reducer vibration issue for three weeks in February. It does not know that the morning shift efficiency is on average 4% higher than the night shift. It does not know that the main customer modified the tolerance requirements on component B7 and this is impacting cycle times.

The data that matters (yesterday's production, the state of the machines, unplanned stoppages, out-of-spec batches) lives in your internal systems. In the SCADA. In the MES. In the ERP. In the Excel spreadsheets that the shift supervisor fills out every morning.

Without access to this data, AI is blind to your operational reality. It can help with general procedures, documentation, and theoretical explanations. But it cannot answer the question that actually matters: "What is happening on line 2 right now?"

The problem is not the quality of AI models. The models are excellent. The problem is data access.

Until November 2024, connecting an LLM to company systems meant building custom integrations for every system-model pair. Want to connect Claude to your SCADA? One integration. Want to connect Copilot to your MES? Another. Want to switch from Claude to GPT-4? Rewrite everything.

This approach does not scale. It does not scale in terms of development costs, it does not scale in terms of maintenance, and it does not scale when the AI model changes or when you add a new business system.

MCP solves exactly this problem.

Model Context Protocol is an open standard that defines a universal interface for connecting any LLM to any external data source or tool. The logic is that of the USB protocol: before USB, every peripheral had its own proprietary connector. After USB, one standard interface connects everything to everything.

With MCP, you write an "MCP server" that exposes your SCADA data as tools accessible via standard protocol. Any MCP-compatible AI agent, whether Claude, Copilot, Cursor, or a custom agent in .NET, can use those tools without knowing the details of your industrial system.

The separation of responsibilities is clear: the MCP server knows the industrial data but knows nothing about AI. The AI agent knows reasoning but knows nothing about your specific SCADA. MCP is the interface that makes them talk.

The practical advantage for a manufacturing company is significant. You build the MCP server for your SCADA once. From that point on, any AI tool that supports MCP can access your production data without additional integration work.

When Microsoft updates Copilot with new capabilities, they will work with your data. When Anthropic releases an improved version of Claude, it will work with your existing infrastructure. When your company decides to change AI providers for cost or policy reasons, you change only the client, not the entire integration.

This is MCP's value proposition in the industrial world: decoupling intelligence (the model) from data (the systems), with a standard interface that makes integration maintainable over the long term.

And for a sector like Italian manufacturing, where legacy systems must coexist with new technologies for years or decades, this is not a technical detail. It is the difference between a sustainable investment and a dead end.

What is an AI agent and how does it work in a manufacturing context

The word "agent" is used loosely in the AI world. It is worth clarifying what it means in the context of this article, because the distinction from a traditional chatbot is deep and has concrete implications for what you can do and how you build it.

A chatbot answers questions. It takes the text you give it, processes it, and returns a response. It is fundamentally a text-to-text transformation mechanism, even when it is very sophisticated.

An AI agent executes tasks. It receives a goal, breaks it down into steps, uses tools to gather information and act, verifies whether it has achieved the goal, and corrects its path if necessary. It is a system that reasons and acts iteratively.

The cycle of an AI agent consists of four phases that repeat:

Perception. The agent receives the goal from the user and gathers the initial context. In manufacturing: the user asks "Why did the OEE on line 3 drop below 75% this week?" The agent starts by collecting data from the MES.

Reasoning. The LLM analyzes the available data and decides the next step. "I have production data but I am missing details on stoppage causes. I need to query the SCADA alarm management system."

Action. The agent calls the appropriate MCP tool to retrieve the missing data or execute the necessary operation. The SCADA responds with the alarm data from the last 7 days.

Verification. The agent evaluates whether it has enough information to respond to the original goal. If not, it returns to the reasoning phase and plans the next step.

This cycle continues until the agent reaches the goal or decides it does not have enough tools to proceed.
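The four phases above can be sketched as a plain loop, independent of any SDK. Everything in this sketch is a toy illustration: the reasoner and the tools are hard-coded stand-ins for the LLM and for real MCP tools, and all names are invented for the example.

```csharp
using System;
using System.Collections.Generic;

// Action: toy MCP tool call returning canned data.
string CallTool(string name) => name switch
{
    "GetLineEfficiency"     => "OEE week 12: 72% (previous week: 78%)",
    "GetUnplannedStoppages" => "4 stoppages on line 3, all temperature alarms",
    _ => "unknown tool"
};

// Reasoning: decide the next step from the goal plus the context gathered so far.
// Returns (toolToCall, finalAnswer); a non-null finalAnswer ends the loop.
(string? tool, string? answer) Reason(string goal, List<string> context) =>
    context.Count switch
    {
        0 => ("GetLineEfficiency", null),      // need production data first
        1 => ("GetUnplannedStoppages", null),  // then the stoppage causes
        _ => (null, "OEE dropped 6 points because of 4 temperature stoppages on line 3.")
    };

string RunAgent(string goal, int maxIterations = 10)
{
    var context = new List<string>();              // Perception: facts gathered so far
    for (int i = 0; i < maxIterations; i++)
    {
        var (tool, answer) = Reason(goal, context); // Reasoning
        if (answer is not null) return answer;      // Verification: goal reached
        context.Add(CallTool(tool!));               // Action: call an MCP tool
    }
    return "Goal not reachable with the available tools.";
}

// Prints the final diagnosis after two tool calls.
Console.WriteLine(RunAgent("Why did the OEE on line 3 drop below 75% this week?"));
```

In a real agent the `Reason` step is the LLM call and `CallTool` goes through the MCP client; the loop structure, and the cap on iterations, stay the same.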

The practical difference compared to a chatbot is enormous. A chatbot can explain what OEE is. An AI agent can:

Query the MES to retrieve the week's production data.
Identify that OEE dropped 6% compared to the previous week.
Query the SCADA to find unplanned stoppages.
Discover that there were 4 stoppages on line 3 due to temperature alarms.
Correlate with maintenance data to verify whether the furnace air filter was due for replacement.
Present the manager with a complete diagnosis, an estimated probability for each root cause, and a proposed corrective action.

All this in response to a single question, without the user needing to open 3 different systems, export data to Excel, and analyze it manually.

For Italian manufacturing, where production managers spend hours each day collecting data from different systems to prepare reports that are already outdated by the time they are written, this represents a significant operational change.

The potential does not stop at reports. A monitoring agent can continuously analyze SCADA data and send proactive alerts when it detects anomalous patterns in temperature, vibration, or energy consumption trends, well before a formal alarm is triggered. A scheduling agent can optimize the sequence of production orders simultaneously considering material availability, machine capacity, customer priorities, and setup constraints.

This is not about replacing the human operator. It is about amplifying their decision-making capacity, giving them immediate access to already-processed relevant information, instead of forcing them to collect and aggregate it manually.

MCP as infrastructure for AI agents in industrial settings

Understanding the MCP architecture at a technical level is fundamental to building systems that work in production. Let us look at the main components and how they interact.

MCP Server. This is the component that exposes data and functionality to clients. In a manufacturing context, you write an MCP server that connects to your SCADA via OPC-UA, to your MES via its APIs, to the ERP via its web services. The server exposes this data as "tools" that agents can call. Tools have a name, description, and parameter schema. The agent reads the tool descriptions and autonomously decides which one to call based on the goal.

MCP Client. This is the component integrated in the AI agent that knows how to communicate with MCP servers. Claude has a native MCP client. Microsoft Copilot supports MCP through connectors. A custom agent in .NET uses the ModelContextProtocol SDK to implement the client.

Host. This is the application that hosts the agent and manages communication between the MCP client and the user interface. It can be a Blazor web application, a REST API, a background service that processes automatic alerts, or a desktop tool for line operators.

The architectural advantage of MCP in manufacturing becomes clear when you consider the reality of a company with heterogeneous systems. A typical plant has a SCADA for production control, a MES for order management and traceability, an ERP for planning and accounting, perhaps a separate maintenance management system (CMMS), and historical data in databases or files.

With the traditional approach (custom integrations), connecting an AI agent to all these systems means writing and maintaining N separate integrations, one for each system. If you change the AI model, you rewrite everything. If you change a system, you update the integration.

With MCP, each system exposes a standardized MCP server. The AI agent talks to all systems through the same protocol. Change AI model? The MCP server does not change. Add a new system? Write a new MCP server, the agent detects it automatically.

This separation of responsibilities has a direct impact on the long-term sustainability of the project. In manufacturing, systems last decades. A SCADA installed today will still be in production 15 years from now. Building integrations that are maintainable over that time span is not a luxury: it is a requirement.

An often underestimated additional advantage: MCP supports dynamic tool discovery. An agent can ask the MCP server which tools are available and use them without them being statically configured. This makes it possible to build generic agents that adapt to the available systems in the specific context, without requiring a rewrite for each installation variant.

Building an AI agent for manufacturing with .NET: the complete architecture

Let us look at the concrete technology stack for building an AI agent that connects to industrial systems via MCP, using .NET as the development platform.

Microsoft.Extensions.AI is the Microsoft library for LLM model abstraction in .NET. It defines standard interfaces such as IChatClient that work with Azure OpenAI, direct OpenAI, Anthropic Claude, and other providers. The advantage is the ability to change LLM providers without modifying application code, exactly as ILogger abstracts the logging system.

Semantic Kernel is the Microsoft framework for AI agent orchestration in .NET. It manages the agent's reasoning cycle, tool selection and invocation, conversational memory management, and LLM integration through Microsoft.Extensions.AI. For manufacturing, it is the component that implements the "plan, execute, verify" logic of the agent.

ModelContextProtocol SDK is the official NuGet package for implementing MCP clients and servers in .NET. On the server side, it allows tools to be exposed by annotating C# methods with attributes. On the client side, it allows the agent to discover and invoke tools on connected MCP servers.

The project structure is organized into three distinct layers:

MCP Server layer (data access). One or more .NET projects that connect to industrial systems. The MCP server for the SCADA uses the OPC Foundation's OPC UA .NET Standard library to read tags and alarms. The MCP server for the MES calls the system's REST APIs or accesses the database directly. Each server exposes tools with descriptive names such as GetLineEfficiency, GetUnplannedStoppages, GetOrderStatus.

Agent Host layer (orchestration). The project that hosts Semantic Kernel and manages the agent cycle. It receives the user's request, passes it to the agent, monitors execution, and collects the result. It can be an ASP.NET Core service with REST endpoints or a background service for automatic monitoring.

UI layer. The interface toward end users. For a conversational assistant for the plant manager: a Blazor web application with a chat interface. For automatic alerts: integration with Teams or email. For operational dashboards: REST API consumed by the existing visualization.

// MCP client and Semantic Kernel agent registration
// (illustrative sketch: exact registration APIs vary across SDK versions)
builder.Services.AddMcpClient(options =>
{
    options.AddServer("scada", new McpServerOptions
    {
        Command = "ScadaMcpServer",
        TransportType = McpTransportType.StdIo
    });
    options.AddServer("mes", new McpServerOptions
    {
        Url = "http://mes-mcp-server:5000/mcp",
        TransportType = McpTransportType.Http
    });
});

builder.Services.AddKernel()
    .AddAzureOpenAIChatCompletion(
        deploymentName: config["AzureOpenAI:DeploymentName"],
        endpoint: config["AzureOpenAI:Endpoint"],
        apiKey: config["AzureOpenAI:ApiKey"])
    .AddMcpToolsFromServices();

The code above shows system registration. MCP servers are registered with their transport type (StdIo for local processes, HTTP for remote servers). Semantic Kernel is configured with the LLM model and MCP tools are added automatically through AddMcpToolsFromServices().

On the MCP server side, tool definitions are equally simple thanks to attributes:

// Example MCP tool for the SCADA
[McpServerTool, Description("Returns the OEE efficiency of a production line for a time interval")]
public async Task<LineEfficiencyResult> GetLineEfficiency(
    [Description("Line identifier (e.g., LINE_2)")] string lineId,
    [Description("Period start in ISO8601 format")] DateTime from,
    [Description("Period end in ISO8601 format")] DateTime to)
{
    var tags = await _opcUaClient.ReadTagsAsync(lineId, from, to);
    return CalculateOee(tags);
}

The agent reads the tool description, understands what it does, and autonomously decides whether and when to call it based on the goal it needs to achieve. Descriptions are critical: the more precise and contextualized to the manufacturing domain they are, the more correct choices the agent will make.

A critical aspect for industrial production: error handling and timeouts. SCADA systems can have variable latencies, OPC-UA servers can be temporarily unreachable, and the MES can have maintenance windows. The MCP server must handle these cases gracefully, returning structured errors that the agent can interpret and report to the user instead of propagating exceptions.
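One way to sketch this, independent of the actual SDK types (the result shape and the error codes below are invented for illustration): wrap every data-source call in a timeout and turn failures into structured, agent-readable errors instead of letting exceptions propagate.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Wraps a data-source call with a timeout; failures become structured
// (Ok, Data, Error) results the agent can interpret and relay to the user.
async Task<(bool Ok, string? Data, string? Error)> ReadWithTimeoutAsync(
    Func<CancellationToken, Task<string>> read, TimeSpan timeout)
{
    using var cts = new CancellationTokenSource(timeout);
    try
    {
        return (true, await read(cts.Token), null);
    }
    catch (OperationCanceledException)
    {
        return (false, null,
            $"SCADA_TIMEOUT: no response within {timeout.TotalSeconds}s; " +
            "the OPC-UA server may be busy or temporarily unreachable.");
    }
    catch (Exception ex)
    {
        return (false, null, $"SCADA_ERROR: {ex.Message}");
    }
}

// A read that never completes simulates an unreachable OPC-UA server.
var result = await ReadWithTimeoutAsync(
    async ct => { await Task.Delay(Timeout.Infinite, ct); return "never"; },
    TimeSpan.FromMilliseconds(100));
Console.WriteLine(result.Error); // a SCADA_TIMEOUT message, not an exception
```

The error text matters as much as the error handling: a message like "Data may be temporarily unavailable" lets the agent tell the user something useful instead of failing silently.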

Concrete examples of AI agents in production: what works today

Let us leave theory behind. Here are three concrete scenarios of AI agents solving real problems in factories, with practical considerations for each.

Case 1: the virtual assistant for the plant manager.

The manager starts the shift by asking the agent: "Give me a briefing on the current situation." The agent queries the MES for open orders and delays, the SCADA for line status and active alarms, and the maintenance system for the day's scheduled interventions. It returns a structured summary in natural language with the critical points requiring attention.

During the shift, the manager can ask specific questions: "Why is line 4 at 68% efficiency?" The agent analyzes data from the last 2 shifts, identifies the main stoppages, correlates with alarm data, and responds with a diagnosis with estimated probabilities for each cause.

The value of this agent is not in answering obvious questions. It is in eliminating the time the manager spends every morning opening different systems, exporting data, and comparing it manually. One hour per day saved for the manager amounts to tens of thousands of euros in productivity over a year.

Case 2: the proactive anomaly monitor.

This agent does not wait for questions: it runs in the background, periodically analyzing SCADA data to detect anomalous patterns.

It does not simply relay existing alarms from the system. It does something more sophisticated: it analyzes trends. A temperature rising by 0.3 degrees every hour does not trigger an alarm today, but in 8 hours it will exceed the critical threshold. A monitoring agent that analyzes this trend can notify the maintenance technician in advance, enabling preventive action during the next scheduled maintenance window instead of managing an unplanned stoppage at the worst possible moment.

The notification includes not just the anomaly but the context: "The temperature of motor M-14 on line 3 is increasing at a rate of 0.28 °C/hour. At the current rate, it will reach the alarm threshold (85 °C) in approximately 7 hours. History: this machine had a bearing failure 14 months ago with a similar pattern. Recommendation: check lubrication and bearing condition at the next scheduled stop (2:00 PM)."
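The time-to-threshold arithmetic behind such a notification is simple linear extrapolation. The current temperature is not stated in the example, so the 83 °C below is an assumption chosen to reproduce the "approximately 7 hours" figure.

```csharp
using System;

// Hours until a linearly rising value crosses a threshold;
// a flat or falling trend never crosses it.
double HoursToThreshold(double current, double threshold, double ratePerHour) =>
    ratePerHour <= 0 ? double.PositiveInfinity : (threshold - current) / ratePerHour;

// Motor M-14: assumed 83.0 °C now, alarm at 85 °C, rising 0.28 °C/hour.
var hours = HoursToThreshold(current: 83.0, threshold: 85.0, ratePerHour: 0.28);
Console.WriteLine($"Threshold reached in ~{hours:F1} hours"); // ~7.1 hours
```

A production monitor would estimate the rate from a regression over a sliding window of samples rather than a single pair of readings, but the alert logic reduces to this calculation.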

Case 3: the order scheduling optimizer.

Production scheduling is a complex optimization problem. The planner must balance customer priorities, material availability, machine capacity, setup times, and delivery deadlines. This is typically done with spreadsheets and a lot of experience.

An agent connected to the MES and ERP can analyze the situation of open orders, the actual availability of lines (including planned maintenance stoppages), material inventory, and propose an optimized sequence with the explanation of the reasoning.

This is not about replacing the planner. The agent proposes, the planner reviews and approves. But instead of starting from scratch every morning, the planner starts from a proposal already consistent with all known constraints, and uses their experience to adjust the special cases that require human judgment.

These three cases share something: they do not replace existing systems, they sit on top of them. The SCADA continues to do process control. The MES continues to manage traceability. The AI agent is an intelligent interface that makes these systems more accessible and their value more immediately usable.

Security of AI agents in industrial OT environments: risks not to underestimate

Security in OT (Operational Technology) environments is a topic that cannot be treated as a footnote. An industrial control system that stops or behaves unexpectedly can cause physical damage, significant economic losses, and in the most serious cases, risks to personal safety.

When you introduce an AI agent connected to industrial systems, the risks change in nature compared to a traditional software application. It is not just about bugs or cybersecurity vulnerabilities. It is about a system that makes decisions and potentially executes actions autonomously, based on reasoning that is not always predictable.

The fundamental principle is this: no AI agent should be able to send direct control commands to PLCs or SCADA systems without explicit supervision by a human operator.

This means the architecture must be designed with this separation as a non-negotiable constraint. Read-only MCP tools must be architecturally separated from write/command tools. Read tools are used freely by the agent to collect data. Write tools require an explicit approval pattern: the agent proposes the action, the operator approves it via a dedicated interface, and only after approval is the action executed.
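A minimal sketch of that propose/approve/execute gate, with every name invented for illustration: write actions are queued as proposals, an audit entry is recorded at each step, and execution refuses anything a human has not explicitly approved.

```csharp
using System;
using System.Collections.Generic;

var pending  = new Dictionary<Guid, string>(); // proposed, awaiting approval
var approved = new HashSet<Guid>();
var log      = new List<string>();             // audit trail of every step

// The agent may only propose a write action; nothing is executed yet.
Guid Propose(string action)
{
    var id = Guid.NewGuid();
    pending[id] = action;
    log.Add($"PROPOSED {id}: {action}");
    return id;
}

// A human operator approves through a dedicated interface.
void Approve(Guid id, string operatorName)
{
    if (!pending.ContainsKey(id)) throw new InvalidOperationException("unknown proposal");
    approved.Add(id);
    log.Add($"APPROVED {id} by {operatorName}");
}

// Execution refuses anything not explicitly approved.
bool Execute(Guid id)
{
    if (!approved.Contains(id)) { log.Add($"BLOCKED {id}: not approved"); return false; }
    log.Add($"EXECUTED {id}: {pending[id]}");
    return true;
}

var id = Propose("Reduce furnace setpoint from 180 °C to 175 °C");
Console.WriteLine(Execute(id));  // False: blocked, no approval recorded yet
Approve(id, "m.rossi");
Console.WriteLine(Execute(id));  // True: runs only after human approval
```

In a real deployment the approval step goes through an authenticated UI and the log goes to durable, tamper-evident storage; the invariant to preserve is that no code path reaches the PLC or SCADA without passing through the gate.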

Let us look at the concrete risks to address:

Tool privilege escalation. If a read MCP tool also exposes write functionality "for convenience," an agent might use it unexpectedly. Each tool must have a precise, minimal scope. A tool that reads valve status must not be able to modify it, not even "as a service."

Prompt injection. An attacker could modify data in an industrial system (a notes field in a MES record, the name of a batch) to include instructions for the AI agent. This is a real risk in systems where data comes from not fully controlled sources. The solution is context validation: MCP tools must never return raw, unsanitized data.

Decision latency. In environments where conditions change rapidly (a chemical process, a high-speed line), an agent that takes 30 seconds to reason and respond might propose actions that are already obsolete. The architecture must clarify which decisions can be delegated to the agent (diagnostic analysis, reports) and which require real-time control systems (PID control loops, safety systems).

Audit logs and traceability. Every action proposed by the agent, every operator approval, every MCP tool call must be recorded with a timestamp, user, and complete payload. In regulated sectors (pharmaceutical, food) this is a regulatory requirement. In others it is an essential best practice for diagnosing problems and demonstrating compliance.

Capability sandboxing. The agent must have access only to the systems and data necessary for its specific purpose. An agent reporting on production does not need access to the physical access management system. Apply the principle of least privilege to AI agents as well.

The network must be segmented: the MCP server that accesses the SCADA must reside in the OT network DMZ, with controlled access from both the IT and OT networks. There must be no direct path between the AI agent (which might be exposed on the internet for mobile access) and the PLCs.

These security requirements are not obstacles to adoption. They are the framework that makes adoption sustainable and reliable. A company that implements MCP and AI agents respecting these principles builds a system it can rely on. One that ignores them creates vulnerabilities.

Integration with Copilot Studio and Azure AI Foundry: for those who do not want to start from scratch

Not every manufacturing company has an internal .NET development team capable of building an AI agent system from scratch. For many Italian SMEs, this is the most common situation. And Microsoft's response to this scenario is concrete.

Microsoft Copilot Studio is the no-code/low-code platform for creating custom AI agents in the Microsoft 365 ecosystem. Starting in 2025, Copilot Studio natively supports MCP connectors. This means you can take a custom .NET MCP server that exposes your SCADA data, register it as a connector in Copilot Studio, and create an agent that uses that data without writing code for orchestration.

The advantage is setup speed and integration with Teams, SharePoint, and Outlook. The plant manager can query production data directly from Teams, using the interface they already know. No new application is needed, no training on new tools.

The limitation is customization. Copilot Studio handles structured conversation flows well, but has constraints on complex orchestration logic, deep multi-step reasoning, and integration with non-Microsoft systems.

Azure AI Foundry (the evolution of Azure AI Studio) is Microsoft's platform for custom AI agents on Azure OpenAI models. It allows defining agents with advanced reasoning capabilities, configuring tools and MCP connections, and deploying agents as Azure services. It is the choice for those who want the flexibility of code with the advantage of Microsoft-managed infrastructure.

How do you choose between Copilot Studio, Azure AI Foundry, and custom .NET development?

Copilot Studio is the right choice if your company is already in the Microsoft 365 ecosystem, you want a proof-of-concept in 2-3 weeks, your use case is primarily conversational, and you have no deep customization requirements. The limitations emerge when you have complex business logic, need integration with non-standard legacy systems, or have stringent OT security requirements.

Azure AI Foundry makes sense if you want to use Azure OpenAI models with enterprise support (SLA, European data residency, compliance), need a middle ground between no-code and full custom development, and your team has Azure cloud skills.

Custom .NET development with Semantic Kernel and the MCP SDK makes sense if you have deep customization requirements, need to integrate legacy systems with non-standard APIs, have OT security requirements that demand full architectural control, or are building a product you want to maintain for years. It has the highest initial cost but the greatest long-term flexibility.

In many real projects the answer is hybrid: a custom .NET MCP server for legacy industrial system access, Azure AI Foundry or Copilot Studio for agent orchestration, and a Teams interface for users. Write custom code only where strictly necessary, and use managed platforms for the rest.

For an Italian manufacturing SME with a limited budget, this hybrid approach allows a first functional agent to be delivered in 4-6 weeks with a reasonable investment, and then evaluate whether to invest in custom development based on the real results of the pilot.

The ROI of AI agents in the factory: how to measure value and convince management

Any technology project in a manufacturing company must answer a simple question: how much is it worth? Decision makers in Italian manufacturing are pragmatic. They do not invest on visions of the future: they invest on demonstrable returns.

The problem with AI agents is that value is often difficult to quantify upfront, because it depends on how much time people currently spend doing things the agent could do better and faster. You need to start by measuring the current problem.

Data collection time for operational reports. How much time does the production manager spend every morning collecting the night's data and preparing the briefing for management? In many medium-sized Italian manufacturing companies, this activity takes 45-90 minutes per day. An AI agent that does this collection automatically and presents it in a structured way recovers that hour every day. For a manager with a company cost of 60,000 euros per year, that daily hour is roughly an eighth of their working time, worth around 7,500 euros per year in salary cost alone, before counting the value of the faster decisions it enables. Just from this use case.

MTTR (Mean Time To Repair) and unplanned stoppages. How long does an average unplanned stoppage last on your main line? What is the hourly cost of that stoppage? For a line with 10 million euros in annual revenue operating on 2 shifts, the hourly cost of a stoppage is around 4,000-6,000 euros for lost added value alone, excluding overtime for recovery.

A monitoring agent that detects anomalies in advance and reduces MTTR by 20% on a line with 50 hours of unplanned stoppages per year (a common figure) is worth 40,000-60,000 euros in avoided stoppages. This is a measurable figure, comparable before and after implementation.

OEE and scheduling optimization. On a line with 10 million euros in annual revenue, each percentage point of OEE is worth approximately 100,000 euros. An agent that optimizes order scheduling, reducing unnecessary setup times and minimizing material waits, can reasonably contribute 0.5-1.5 additional OEE points in 12 months. That is worth 50,000-150,000 euros.

How to present these numbers to management? The three-step framework:

First: quantify the cost of the current problem. Have the production manager make an honest estimate of how much time they spend collecting data instead of analyzing it. Measure the frequency and average duration of unplanned stoppages. Calculate current OEE and compare it to the industry best-in-class.

Second: propose a pilot with measurable objectives. Not "implement AI in the factory." But "reduce the morning briefing preparation time from 60 to 10 minutes" or "reduce average MTTR by 15% in 6 months." Specific, measurable objectives, with a baseline defined before the start.

Third: size the pilot so that ROI is demonstrable even with partial results. A 50,000-euro pilot that reduces stoppages by 10% on a line where a stoppage costs 5,000 euros per hour pays for itself in about six months even if that reduction amounts to only 20 hours of avoided stoppages per year (20 hours at 5,000 euros is 100,000 euros per year against a 50,000-euro investment). The numbers must hold up even in the conservative scenario.
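As a sanity check on the conservative scenario, the payback arithmetic can be written out explicitly, using the pilot cost and stoppage figures from the text:

```csharp
using System;

double pilotCost = 50_000;          // euros, one-off investment
double stoppageCostPerHour = 5_000; // euros per hour of unplanned stoppage
double hoursAvoidedPerYear = 20;    // conservative estimate of avoided stoppages

double annualSavings  = hoursAvoidedPerYear * stoppageCostPerHour; // 100,000 €/yr
double paybackMonths  = pilotCost / annualSavings * 12;            // 6 months

Console.WriteLine($"Annual savings: {annualSavings:N0} €, payback: {paybackMonths:F0} months");
```

Presenting the calculation this explicitly, with each assumption as a named input, is also the easiest way to let management stress-test the scenario with their own numbers.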

How to start an AI agent pilot with MCP in 90 days

Ninety days to demonstrate real value. This is the framework I recommend for any Italian manufacturing company that wants to get started with MCP and AI agents without risking significant investment in technologies not yet validated in their own context.

The guiding principle is simple: real value in 90 days, or everything stops. No endless pilots that produce PowerPoint decks. No lab experiments disconnected from operational reality. An agent in production, with real users, answering real questions about real data.

Weeks 1-4: foundations and inventory.

The first month is dedicated to understanding what exists and building the technical foundations. The main activity is system inventory: which systems exist (SCADA, MES, ERP, others), what data they expose, how to access them (REST API, OPC-UA, direct database, file export), who is the technical owner of each.

Simultaneously, identify the 3-5 pilot users: the production manager, the most tech-savvy shift supervisor, the maintenance manager. Conduct interviews to understand the 3 questions they ask systems most often and for which they spend the most time finding answers. These become the pilot use cases.

On the technical side: development environment setup, choice of LLM model (Azure OpenAI is the safest choice for Italian SMEs for data residency and enterprise support reasons), implementation of the first read-only MCP server connected to the system with the most relevant data. Strictly read-only: no write tools in the first 4 weeks.

Weeks 5-8: first agent in production.

With the foundations in place, the first agent is built. Not the most sophisticated: the most useful. Typically a Q&A assistant that answers the 3 questions identified in the previous weeks, using data from the MCP server.

The interface should be as simple as possible: a web chat accessible from a browser, preferably integrated with Teams if the company already uses it. The goal is not to make a good impression with the technology: it is to get people to use the tool.

During weeks 5-8, continuous feedback is collected from pilot users. What works? What does not work? Which questions does the agent answer poorly? The agent's system prompt is iterated every week based on real feedback.

A critical element: do not hide errors. If the agent answers a question incorrectly, you want to know immediately. Add an explicit feedback mechanism (thumbs up/down) and discuss errors with pilot users in the weekly review.

Weeks 9-12: measurement and decision.

The last month is dedicated to measuring results against the objectives defined at the start and making the decision on how to proceed.

Measure how much time pilot users save using the agent instead of traditional systems. Record answer quality (percentage of correct vs. incorrect answers, based on collected feedback). Calculate preliminary ROI based on time saved and the economic baselines identified before the pilot.

The final decision has three possible outcomes: extend the pilot to more users and more use cases (the pilot demonstrated value, so invest more), restructure the pilot on a different basis (the value is there but the use case was not the right one), or stop (the value is not demonstrable in the company's specific context, and the resources are better used elsewhere).

The third option is legitimate. Not every company is ready for AI agents at this moment. The reasons can be the quality of data in existing systems, the maturity of the IT team, or cultural resistance to change. Better to discover this after 90 days with a contained investment than after 2 years and 500,000 euros of project spend.

The 90-day framework works because it forces difficult decisions to surface early. If the SCADA data is too noisy to be used by an AI agent, you discover it in week 3, not week 40. If users do not use the tool despite enthusiastic briefings, you discover it in week 6. This is valuable information that guides future investment.

The Italian manufacturing market and the missing skills: why now is the right time for a .NET developer

Let us close with a market perspective, because this article is also written for .NET developers who are evaluating where to invest their learning time in the next 12-18 months.

Manufacturing is the sector with the highest GDP contribution in Italy after services. Italian manufacturing SMEs are the heart of the country's economic system: precision mechanics, automotive components, pharmaceuticals, packaging, food processing. All of these companies have SCADA systems, MES, ERP. All of them are feeling the competitive pressure of Industry 4.0. Almost none have an internal team capable of building AI agents.

The professional profile that is needed is a hybrid one: someone who knows .NET, understands at least the principles of industrial systems (OPC-UA, MES, SCADA), and can build AI applications with MCP and Semantic Kernel. This profile practically does not exist on the Italian market in 2026.

Those who build it in the next 12 months will be positioned in a market with high demand and almost zero supply.

The skills to acquire, in priority order:

Microsoft.Extensions.AI and Semantic Kernel. Microsoft's framework for AI agents in .NET is mature, well-documented, and supported over the long term. It is the foundation to build on.

ModelContextProtocol SDK. The official .NET library for MCP. Learning it takes a couple of weeks, and it is the most differentiating skill in the short term.

OPC-UA and SCADA systems. You do not need to become an automation engineer. You need to understand how they work well enough to build MCP servers that read data from these systems. The article on how to build SCADA systems with C# and .NET is the starting point.

Azure OpenAI and Azure AI Foundry. For Italian manufacturing SMEs, Azure is the most realistic cloud platform: local presence, enterprise support, European data compliance. Knowing how to deploy agents on Azure AI Foundry is a commercially relevant skill.

The right time to start is now. MCP is mature enough to be reliable in production, but new enough that there is not yet a critical mass of specialized developers. This window will not stay open forever.

Italian manufacturing companies are starting to ask their first questions on these topics. Those who arrive at these conversations already prepared, with a working prototype and the competence to guide a pilot, will find very fertile ground.

AI is not a fad. LLMs are not the solution to everything. But connecting artificial intelligence to the real data of manufacturing companies via MCP is one of the most concrete, high-impact technical opportunities of the coming years. And it requires exactly the skills that .NET developers already have, plus a targeted investment in MCP and Semantic Kernel.

It is worth doing.

Frequently asked questions

What is MCP (Model Context Protocol)?

MCP (Model Context Protocol) is an open standard developed by Anthropic in November 2024 that defines how a language model (LLM) can connect to external data and tools in a standardized way. It is often described as the USB-C for artificial intelligence: a universal interface that allows any compatible AI agent (Claude, Copilot, Cursor and others) to access SCADA, MES, ERP systems and databases without custom integrations for each combination. In an industrial context, MCP solves the 'blind AI' problem: it gives the agent access to the company's real data.

What is the difference between a chatbot and an AI agent?

A chatbot answers questions based on its pre-trained knowledge and the text it is given. An AI agent executes multi-step tasks autonomously: it receives a goal, plans the necessary steps, uses tools (MCP tools) to retrieve real data from company systems, verifies the results and iterates until completion. In a manufacturing context, a chatbot can explain what OEE is. An AI agent can calculate the current OEE of line 2 by querying the MES, identify the causes of losses and propose corrective actions to the supervisor.

What is the recommended .NET stack for building AI agents?

The recommended stack for AI agents in .NET includes: Microsoft.Extensions.AI for LLM abstraction (works with Azure OpenAI, OpenAI, Anthropic and others), Semantic Kernel for agent orchestration and reasoning cycle management, and the official ModelContextProtocol SDK (available on NuGet) for exposing MCP tools and connecting to existing MCP servers. For the industrial data access layer, the OPC Foundation's OPC UA .NET Standard stack covers SCADA/PLC, with specific libraries for MES and ERP.
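On the consumer side, the same SDK provides the client API. The sketch below (server project path, server name and tool name are all hypothetical) shows how an agent host could discover the tools an MCP server exposes and invoke one of them over stdio:

```csharp
// MCP client sketch (ModelContextProtocol NuGet package, preview).
// The command, project path and tool name are illustrative assumptions.
using ModelContextProtocol.Client;

var transport = new StdioClientTransport(new StdioClientTransportOptions
{
    Name = "production-data",
    Command = "dotnet",
    Arguments = ["run", "--project", "./ProductionMcpServer"],
});

await using var client = await McpClientFactory.CreateAsync(transport);

// Discover the tools the server exposes.
foreach (var tool in await client.ListToolsAsync())
    Console.WriteLine($"{tool.Name}: {tool.Description}");

// Call one tool directly, by name, with named arguments.
var result = await client.CallToolAsync(
    "GetLineStatus",
    new Dictionary<string, object?> { ["lineId"] = "LINE-2" });
```

In the current preview of the SDK, the discovered tools derive from Microsoft.Extensions.AI's AIFunction, so the same list can be handed to a chat client or to Semantic Kernel as callable functions instead of being invoked by hand as above.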

Should an AI agent directly control machines and PLCs?

No, not without explicit human supervision. The fundamental principle for OT security is 'human in the loop': the agent has read access to control systems to collect data, and can analyze situations and propose actions, but any command toward control systems requires explicit approval from a human operator. No AI agent should be able to send direct control commands to PLCs or SCADA systems without supervision. The agent is an intelligent interface, not an automatic control system.

How much does a pilot project for an AI agent in manufacturing cost?

The cost depends on the complexity of the integration and of the existing systems. A reasonable pilot project (a Q&A agent on production data, with a read-only MCP server connected to a SCADA or MES system) requires 4-8 weeks of development with a team of 1-2 experienced .NET developers. The main variable cost is API calls to the LLM (Azure OpenAI or Anthropic Claude), which for typical use by a plant manager amount to a few hundred euros per month. The alternative is Microsoft Copilot Studio, for a faster but less customizable solution.

Should you choose Azure AI Foundry or Copilot Studio?

Azure AI Foundry is Microsoft's platform for building custom AI agents on Azure OpenAI models, with native support for MCP and orchestration tools. Copilot Studio is the no-code/low-code alternative that allows agents to be created with MCP connectors without writing code. Copilot Studio is better for rapid proofs of concept and for companies without a dedicated development team. Azure AI Foundry and the .NET stack with Semantic Kernel are better when you need deep customization, integration with legacy systems, complex business logic, or the stringent security requirements typical of the OT environment.

How do you measure the ROI of AI agents in a factory?

The main metrics for measuring the ROI of AI agents in a factory are: reduction in data collection time (from hours to minutes for operational reports), improvement in MTTR (Mean Time To Repair) thanks to proactive alerts with root cause analysis, and OEE increase through scheduling optimizations. As a reference: on a line with 10 million euros in annual revenue, each percentage point of OEE is worth approximately 100,000 euros. A 20% reduction in MTTR on a line where a stoppage costs 5,000 euros per hour is worth tens of thousands of euros per year.
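These reference figures translate into a back-of-the-envelope calculation like the one below. The annual downtime hours are a hypothetical input added for illustration, not a figure from the article; replace every number with your own plant's data before drawing conclusions.

```csharp
// Back-of-the-envelope ROI check using the reference figures from the text.
// All inputs are illustrative; substitute your own plant's numbers.
double annualRevenue = 10_000_000;                 // line revenue, EUR/year
double valuePerOeePoint = annualRevenue * 0.01;    // 1 OEE point ≈ 100,000 EUR

double stopCostPerHour = 5_000;                    // EUR per hour of unplanned stop
double stopHoursPerYear = 50;                      // hypothetical annual downtime
double mttrReduction = 0.20;                       // 20% MTTR improvement

// Savings = downtime cost avoided by resolving stops 20% faster.
double mttrSavings = stopCostPerHour * stopHoursPerYear * mttrReduction; // 50,000 EUR

Console.WriteLine($"1 OEE point  ~ {valuePerOeePoint:N0} EUR/year");
Console.WriteLine($"MTTR -20%    ~ {mttrSavings:N0} EUR/year");
```

Even with deliberately conservative downtime assumptions, the yearly value of one OEE point plus the MTTR savings comfortably exceeds the variable cost of a pilot (a few hundred euros per month of LLM API calls).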


Matteo Migliore


Throughout his career, he has worked with organizations such as Cotonella, Il Sole 24 Ore, FIAT and NATO, leading teams in developing scalable platforms and modernizing complex legacy ecosystems.

He has trained hundreds of developers and supported companies of all sizes in turning software into a competitive advantage, reducing technical debt and achieving measurable business results.
