
One of the main problems that developers and companies face when moving from on-premise to the cloud is cost.
On one side there is the old-fashioned on-premise approach: your own servers, hosted locally within the company network or infrastructure, with costs that are generally fixed and do not grow in proportion to usage. On the other side there is the cloud, in our case Azure, which we decided to adopt on all our company projects ten years ago now.
The advantage of on-premise is that once the machines have been purchased and the IT department costs have been budgeted, they do not generate further expenses and, above all, no surprises.
If I have a machine running IIS with an application, a SQL database, a Windows Service and other resources, those services have no consumption-based costs.
The downside is that you have to maintain those machines and services, and there is a serious chance they will break down, perhaps while users are relying on them, during the week, at the weekend or, worst of all, while you are on holiday.
Disks can fail, the machine can crash, and the service is interrupted with no way to quickly recover the data or the system itself.
In some cases we have been contacted by customers who had irretrievably lost all their data (databases, content, technical knowledge base) due to server failures: even after going through the clean room of companies specializing in disaster recovery, nothing could be done.
With the cloud on Azure it's a completely different story.
Having uptime guaranteed by a Service Level Agreement (SLA) is an essential requirement for some projects and customers: it means the systems must be up almost all the time. Typical figures today are around 99.97 or 99.98%, which allows only very short downtimes: at 99.97%, the service may be unavailable for at most about 13 minutes per month. Compare that with a 98% SLA, which would allow roughly 14 hours of downtime per month, a figure that is unacceptable for the majority of applications, especially if those hours are consecutive.
Please note that with 14 consecutive hours of unavailable service, most of our customers would have their phones catch fire with support calls.
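The allowed downtime can be derived directly from the SLA percentage; here is a quick sketch of the arithmetic, assuming a 30-day month:

```csharp
using System;

class SlaMath
{
    // Allowed downtime in minutes for a given SLA, assuming a 30-day month.
    public static double AllowedDowntimeMinutes(double sla)
    {
        const double minutesPerMonth = 30 * 24 * 60; // 43,200 minutes
        return (1 - sla) * minutesPerMonth;
    }

    static void Main()
    {
        Console.WriteLine(AllowedDowntimeMinutes(0.9997)); // ≈ 12.96 minutes per month
        Console.WriteLine(AllowedDowntimeMinutes(0.98));   // ≈ 864 minutes, i.e. ~14.4 hours
    }
}
```

Every extra "9" in the SLA shrinks the allowed downtime by an order of magnitude, which is why the exact percentage in the contract matters so much.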
The cloud, on the one hand, relieves the IT department of infrastructure management costs; on the other, it can lead to very high and, above all, unpredictable costs, which is the real problem.

It's like buying a car at one of those big American used-car dealerships, you know? The ones with flags everywhere and a salesman who is always hungry and ready to convince you to buy "the deal in front of you".

You smell something burning, but the salesman is good: he knows how to answer every objection, he shows you numbers, he assures you the car is really reliable, he shows you the interior and the certified mileage. But when it's time to learn the price, you can't: he answers with a sign bearing a big question mark.
Let me explain right away how this parallel applies to Azure, and to other cloud services as well.
Once inside the portal you can start activating resources, i.e. services such as App Services (used to run APIs or ASP.NET MVC websites), SQL Azure databases, Cosmos DB, artificial intelligence services and all the other nice toys Azure makes available.
Developers are curious by nature, like cats, and start thinking about how they can leverage these services to improve the application or applications they are working on.
Then they move on to evaluating new architectural solutions, such as creating microservices, or implementing building blocks such as security, artificial intelligence, queues and service buses, using everything the platform offers.
Be careful about cloud infrastructure costs with Azure
The problem is that costs can increase quickly, without you realizing it until at least the first invoice. Even after years of use you can get nasty surprises, as happened to us, because of services activated for testing, for debugging, or for scaling up to handle peak load.
These services are then forgotten, and the very high costs they keep generating are only discovered later.
Some will say: "well, but there is Azure cost analysis, which allows you to predict costs based on the activated services".
That's true, but if you let services proliferate and keep turning them on without any organization, you seriously risk incurring completely unforeseen costs from the continuous activation of resources, perhaps oversized ones, and only discovering the real costs later.
It happened to us with one of our Cloud Services, which was still active even though it was no longer needed, because we had replaced it with an App Service; App Services have a whole series of advantages that we explain in our course on "The cloud with Microsoft Azure".
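One habit that helps against forgotten resources is periodically listing everything active in the subscription. A minimal sketch with the Azure CLI (the resource group name is hypothetical, and a prior `az login` is assumed):

```shell
# List every resource in the subscription in a compact table,
# so services nobody remembers activating stand out.
az resource list --output table

# Narrow the view to a single resource group, e.g. one used for testing.
az resource list --resource-group rg-test --output table

# Review accumulated usage for the current billing period.
az consumption usage list --output table
```

A monthly pass over this output, deleting anything unaccounted for, would have caught our forgotten Cloud Service long before the invoice did.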
The trick to spending 300% less with an App Service
Until a few months ago we had used App Service on Windows exclusively, because prices on Linux were substantially identical; now, however, Microsoft is applying a 66% discount on the B (Basic) tier to encourage its adoption.
We then checked again with the Azure Pricing Calculator, and there are big economic advantages for equivalent machines, tiered by performance according to processor type, number of cores and amount of RAM.
This is the performance and pricing comparison table on Windows and Linux:
| Tier | Cores | RAM | Storage | Windows | Linux |
|---|---|---|---|---|---|
| B1 | 1 | 1.75 GB | 10 GB | €46 | €11 |
| B2 | 2 | 3.50 GB | 10 GB | €92 | €21 |
| B3 | 4 | 7.00 GB | 10 GB | €184 | €43 |
| S1 | 1 | 1.75 GB | 50 GB | €61 | €58 |
| S2 | 2 | 3.50 GB | 50 GB | €123 | €116 |
| S3 | 4 | 7.00 GB | 50 GB | €246 | €233 |
| P1v2 | 1 | 3.50 GB | 250 GB | €123 | €104 |
| P2v2 | 2 | 7.00 GB | 250 GB | €246 | €209 |
| P3v2 | 4 | 14.00 GB | 250 GB | €492 | €418 |
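To put the headline figure in context, on the B1 size the Windows price is roughly four times the Linux one, i.e. about 300% more. A quick sketch of the arithmetic, using the prices from the table:

```csharp
using System;

class PriceComparison
{
    // How much more expensive one price is relative to another, in percent.
    public static double ExtraCostPercent(double price, double baseline)
        => (price - baseline) / baseline * 100;

    static void Main()
    {
        // Monthly B1 prices from the table above.
        double windows = 46.0;
        double linux = 11.0;

        Console.WriteLine($"{ExtraCostPercent(windows, linux):F0}% more on Windows"); // ≈ 318% more
    }
}
```

The Basic tier is where the discount bites; on the S and P sizes in the table the gap is a much more modest 5–15%.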
Performance is also better on Linux, by about 20%, for most workloads that run in the cloud, such as APIs and ASP.NET MVC applications. We measured it with the following LINQPad snippet:
```csharp
// LINQPad-style benchmark: compares the average response time of the same
// application deployed on a Windows and on a Linux App Service.
void Main()
{
    // Raise the default limit on concurrent HTTP connections,
    // otherwise the parallel requests queue up behind each other.
    ServicePointManager.DefaultConnectionLimit = 100;

    DownloadWebsite("https://linux.sviluppatoremigliore.com");
    DownloadWebsite("https://windows.bestdeveloper.com");
}

public void DownloadWebsite(string url, int tests = 50)
{
    using (var client = new HttpClient())
    {
        // Warm-up request, excluded from the measurement.
        var warmUp = client.GetAsync(url).Result.Content.ReadAsStringAsync().Result;

        var stopwatch = Stopwatch.StartNew();
        var count = 0;
        Parallel.For(0, tests, x =>
        {
            var result = client.GetAsync(url).Result.Content.ReadAsStringAsync().Result;
            Interlocked.Increment(ref count); // thread-safe counter
        });
        stopwatch.Stop();

        // Average response time in milliseconds (Dump is a LINQPad helper).
        (stopwatch.ElapsedMilliseconds / (double)count).Dump(url);
    }
}
```
The average response time with 50 simulated concurrent users:
- 44.81 milliseconds on Windows
- 34.77 milliseconds on Linux
Using the gateway pattern we are migrating old legacy APIs, developed with ASP.NET MVC, NHibernate and Castle Windsor as the IoC container, to ASP.NET Core with its native dependency injection system and Entity Framework Core, all on version 3.0, which is even faster than 2.2. From the first tests we measured a truly notable performance boost, up to 500%.
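To illustrate the kind of change involved, a Castle Windsor registration typically maps onto the built-in ASP.NET Core container like this (the service and class names here are hypothetical, not from our real codebase):

```csharp
using Microsoft.Extensions.DependencyInjection;

// Hypothetical service used only for illustration.
public interface IInvoiceService { }
public class InvoiceService : IInvoiceService { }

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Castle Windsor equivalent of the line below:
        // container.Register(Component.For<IInvoiceService>()
        //                             .ImplementedBy<InvoiceService>()
        //                             .LifestyleTransient());
        services.AddTransient<IInvoiceService, InvoiceService>();

        services.AddControllers(); // MVC/API support in ASP.NET Core 3.0
    }
}
```

The built-in container covers transient, scoped and singleton lifetimes; only the more exotic Windsor features (interceptors, custom lifestyles) need a third-party container or a redesign.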
We also ran load tests with several hundred users, which Azure lets you do directly from the App Service management blade in the portal.
Think how much you can save through correct sizing and use of the machines if you implement a microservices architecture with 4, 5 or even 10 App Services.
Simplified deployment with App Services
To deploy to Linux from Visual Studio you need to enable Docker: select the web project, open the "Add" menu and choose "Docker support".
Only then does the "App Service Linux" option appear in the Publish menu, allowing you to deploy the application. Being able to do this from Visual Studio is very useful because it performs an incremental update: if only a few files have changed, as with APIs where typically only a few DLLs change, the remote application can be updated in 2 or 3 minutes.
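"Docker support" generates a Dockerfile along these lines (a sketch: the project name is hypothetical, and the exact file Visual Studio produces may differ):

```dockerfile
# Build stage: restore and publish the app with the .NET Core SDK image.
FROM mcr.microsoft.com/dotnet/core/sdk:3.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyApi.csproj -c Release -o /app/publish

# Runtime stage: the slimmer ASP.NET Core image runs the published output.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0 AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyApi.dll"]
```

The two-stage build keeps the SDK out of the final image, so what you deploy to the Linux App Service is only the runtime plus your published files.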
For quality control and the best possible management we use Continuous Integration and Continuous Delivery. On all new projects we use Git instead of TFS (you can find an article in which I explain the differences between the two), so a simple commit and push from Visual Studio is enough to trigger the automatic build on Azure DevOps.
Azure DevOps performs several steps:
- Downloads the source code from scratch into a dedicated workspace
- Restores NuGet packages
- Runs the build
- Runs the automated tests, if there are any
- Deploys the application to the App Service
- Publishes the artifacts, i.e. all the compiled files, so you can download and analyze what was produced
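The steps above correspond to an Azure DevOps pipeline definition along these lines (a minimal sketch: the service connection and app names are assumptions, and task versions may vary):

```yaml
trigger:
  - master

pool:
  vmImage: 'ubuntu-latest'

steps:
  # Restore NuGet packages for the solution.
  - script: dotnet restore
    displayName: 'Restore packages'

  # Compile in Release configuration.
  - script: dotnet build --configuration Release --no-restore
    displayName: 'Build'

  # Run the automated tests, if any.
  - script: dotnet test --configuration Release --no-build
    displayName: 'Run tests'

  # Deploy to the App Service (the service connection name is hypothetical).
  - task: AzureWebApp@1
    inputs:
      azureSubscription: 'my-azure-connection'
      appName: 'my-app-service'

  # Publish the compiled output as a downloadable artifact.
  - task: PublishBuildArtifacts@1
    inputs:
      pathToPublish: '$(Build.ArtifactStagingDirectory)'
```

Once this file is in the repository, every push to the trigger branch runs the whole chain without anyone touching the portal.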
I absolutely advise you to implement automated tests, and unit tests in particular: they let you commit with ever greater safety, without regressions, fix bugs with very little effort, and gradually build maximum confidence in your own and the team's projects.
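Even a minimal unit test pays off in this pipeline. A sketch with xUnit (the class under test is hypothetical, invented for illustration):

```csharp
using Xunit;

// Hypothetical class under test.
public class PriceCalculator
{
    public decimal ApplyVat(decimal net) => net * 1.22m; // 22% Italian VAT
}

public class PriceCalculatorTests
{
    [Fact]
    public void ApplyVat_AddsTwentyTwoPercent()
    {
        var calculator = new PriceCalculator();
        Assert.Equal(122m, calculator.ApplyVat(100m));
    }
}
```

With tests like this in place, the `dotnet test` step in the pipeline blocks a broken commit before it ever reaches the App Service.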
This way you eliminate "performance anxiety" and reach victory with maximum serenity, like Novak Djokovic's recent win at Wimbledon after 4 hours and 55 minutes of battle with Federer. Did he lose his composure? Never, and that is what made him win.
The devil is in the details
Switching from the Windows App Service to the Linux one might seem painless, but in reality it is not: re-deploying even a clean project to another machine, in terms of code and NuGet packages, is not a painless change. Several problems are hard even to identify, because if the application does not start and shows no exceptions, not even in the logs, you have to rely on intuition and experience.
We then sift through the Azure logs and analyze the code to understand which points can cause problems, also because, unlike on Windows machines, it is not possible to remotely debug an App Service on Linux.
Experience makes all the difference: the process can require hours of testing and study of the systems before you can use them well, and above all before you can start immediately with optimized solutions.
You can get there by investing your time, and therefore your money or the company's money, in all the necessary trials. Or you can take a shortcut here too, avoiding the mistakes that someone before you has already made and paid for.
We transfer this experience through a course that lets you move from on-premise services to cloud services much faster and much more safely.
