At .NET Conf, which wrapped up on September 25, 2019, Microsoft announced the release of .NET Core 3.0.
It includes many improvements, including the addition of Windows Forms and WPF, the addition of new JSON APIs, support for ARM64, and performance improvements across the board. C# 8 is also part of this release, which includes nullable reference types, async streams, and more patterns. F# 4.7 is included, and focuses on relaxing syntax and targeting .NET Standard 2.0. You can start updating existing projects to target .NET Core 3.0 today. The release is backwards compatible, making it easy to upgrade.
Watch the team and community talk about .NET at .NET Conf, live NOW!
You can download .NET Core 3.0 for Windows, macOS, and Linux:
ASP.NET Core 3.0 and EF Core 3.0 were also released today. Visual Studio 2019 16.3 and Visual Studio for Mac 8.3, both also released today, are required updates for using .NET Core 3.0 with Visual Studio. .NET Core 3.0 is part of Visual Studio 2019 16.3, so you can get .NET Core simply by updating to Visual Studio 2019 16.3.
Thanks to everyone who contributed to .NET Core 3.0! Hundreds of people were involved in making this release, including major community contributions.
What you should know about 3.0
There are some key improvements and pointers that are important to call out before diving into all the new features in .NET Core 3.0. Here's the quick punch list.
- .NET Core 3.0 has already been battle-tested by hosting dot.net and Bing.com for months. Many other Microsoft teams will soon be deploying large workloads on .NET Core 3.0 in production.
- Performance is significantly improved across many components, and is described in detail in Performance Improvements in .NET Core 3.0.
- C# 8 adds async streams, ranges/indices, more patterns, and nullable reference types. Nullable enables you to directly target the flaws in code that lead to NullReferenceException. The lowest layer of the framework libraries has been annotated, so that you know when to expect null.
- F# 4.7 focuses on making some things easier with implicit yield expressions and some syntax relaxations. It also includes support for LangVersion, and ships nameof and opening of static classes in preview. The F# Core library now also targets .NET Standard 2.0. You can read more at Announcing F# 4.7.
- .NET Standard 2.1 increases the set of types you can use in code shared between .NET Core and Xamarin. .NET Standard 2.1 includes types that shipped with .NET Core 2.1.
- Windows desktop apps are now supported with .NET Core, both for Windows Forms and WPF (and open source). The WPF designer is part of Visual Studio 2019 16.3. The Windows Forms Designer is still in preview and available as a VSIX download.
- .NET Core apps now have executables by default. In previous versions, apps had to be launched via the dotnet command, such as dotnet myapp.dll. Apps can now be launched with an app-specific executable, such as myapp or ./myapp, depending on the operating system.
- High performance JSON APIs have been added, for reader/writer, object model, and serialization scenarios. These APIs were built from scratch on top of Span<T> and use UTF-8 under the covers instead of UTF-16 (like string). These APIs minimize allocations, resulting in faster performance and much less work for the garbage collector. See The Future of JSON in .NET Core 3.0.
- The garbage collector uses less memory by default, often a lot less. This improvement is very beneficial for scenarios where many applications are hosted on the same server. The garbage collector has also been updated to make better use of large numbers of cores, on machines with >64 cores.
- .NET Core has been enhanced for Docker to allow .NET applications to run predictably and efficiently in containers. The garbage collector and thread pool have been updated to work much better when a container has been configured for limited memory or CPU. The .NET Core docker images are smaller, especially the SDK image.
- Raspberry Pi and ARM chips are now supported to enable IoT development, including with the remote Visual Studio debugger. You can deploy apps that listen to sensors and print messages or images to a screen, all using the new GPIO APIs. ASP.NET can be used to expose data as an API or as a site that allows configuration of an IoT device.
- .NET Core 3.0 is a "current" release and will be superseded by .NET Core 3.1, targeted for November 2019. .NET Core 3.1 will be a long-term supported (LTS) release (supported for at least 3 years). We recommend adopting .NET Core 3.0 and then adopting 3.1; it will be very easy to upgrade.
- .NET Core 2.2 will go EOL on 12/23 as it is now the previous "current" version. See the .NET Core support policy.
- .NET Core 3.0 will be available with RHEL 8 in Red Hat Application Streams, following several years of collaboration with Red Hat.
- Visual Studio 2019 16.3 is a required update for Visual Studio users on Windows who want to use .NET Core 3.0.
- Visual Studio for Mac 8.3 is a required update for Visual Studio for Mac users who want to use .NET Core 3.0.
- Visual Studio Code users should always use the latest version of the C# extension to ensure the latest scenarios work, including targeting .NET Core 3.0.
- The .NET Core 3.0 deployment to Azure App Service is currently in progress.
- The .NET Core 3.0 deployment to Azure DevOps is coming soon. We'll update this post when it is available.
Platform support
.NET Core 3.0 is supported on the following operating systems:
- Alpine: 3.9+
- Debian: 9+
- openSUSE: 42.3+
- Fedora: 26+
- Ubuntu: 16.04+
- RHEL: 6+
- SLES: 12+
- macOS: 10.13+
- Windows Client: 7, 8.1, 10 (1607+)
- Windows Server: 2012 R2 SP1+
Note: Windows Forms and WPF apps work only on Windows.
Chip support follows:
- x64 on Windows, macOS, and Linux
- x86 on Windows
- ARM32 on Windows and Linux
- ARM64 on Linux (kernel 4.14+)
Note: .NET Core 3.0 ARM64 distributions require Linux kernel version 4.14 or later. For example, Ubuntu 18.04 meets this requirement, but 16.04 does not.
WPF and Windows Forms
You can build WPF and Windows Forms apps with .NET Core 3, on Windows. We have had a strong compatibility goal from the beginning of the project, to make it easy to migrate desktop applications from .NET Framework to .NET Core. We have received feedback from many developers who have already successfully ported their apps to .NET Core 3.0 that the process is straightforward. To a large degree, we took WPF and Windows Forms as-is and got them working on .NET Core. The engineering project was very different from that, but that's a good way to think about the project.
The following image shows a Windows Forms app for .NET Core:
Visual Studio 2019 16.3 supports building WPF apps that target .NET Core. This includes new templates, an updated XAML designer, and XAML Hot Reload. The designer is similar to the existing XAML designer (which targets .NET Framework); however, you may notice some differences in the experience. The big technical difference is that the designer for .NET Core uses a new surface process (wpfsurface.exe) to solely run the runtime code targeting the .NET Core version. Previously, the .NET Framework WPF designer process (xdesproc.exe) was itself a .NET Framework WPF process that hosted the designer; due to runtime incompatibility, a .NET Framework WPF process (in this case, Visual Studio) cannot load two versions of .NET (.NET Framework and .NET Core) into the same process. This means that some aspects of the designer, such as designer extensions, can't work the same way. If you are writing designer extensions, we recommend reading XAML designer extensibility migration.
Microsoft made Windows Forms and WPF open source in December 2018. It's been great to see the community and the Windows Forms and WPF teams working together to improve these UI frameworks. In the case of WPF, we started with a very small amount of code in the GitHub repository. At this point, nearly all of WPF has been published to GitHub, and a few more components will show up over time. Like other .NET Core projects, these new repositories are part of the .NET Foundation and licensed under the MIT License.
Native Windows interoperability
Windows offers a rich native API, in the form of flat C, COM, and WinRT APIs. We've supported P/Invoke since .NET Core 1.0, and added the ability to CoCreate COM APIs, activate WinRT APIs, and expose managed code as COM components, as part of the .NET Core 3.0 release. We have had many requests for these capabilities, so we know they'll get a lot of use.
Late last year, we announced that we were able to automate Excel from .NET Core. That was a fun moment. Under the covers, this demo uses COM interop features like NoPIA, object equivalence, and custom marshallers. You can now try this and other demos yourself in the extension samples.
Managed C++ and WinRT interop have partial support in .NET Core 3.0 and will be complete in .NET Core 3.1.
What's new in C# 8.0
Nullable reference types
C# 8.0 introduces nullable reference types and non-nullable reference types, which enable you to make important statements about the properties of your reference type variables:
A reference is not supposed to be null. When variables aren't supposed to be null, the compiler enforces rules that ensure it is safe to dereference those variables without first checking for null.
A reference may be null. When variables may be null, the compiler enforces different rules to ensure that you have correctly checked for a null reference.
This new feature offers significant advantages over the handling of reference variables in previous versions of C# where it was not possible to determine the design intent from the variable declaration. By adding nullable reference types, you can declare your intent more clearly, and the compiler helps you do it correctly and discover bugs in your code.
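A minimal sketch of how the annotations and warnings play out (the variable names are illustrative):

```csharp
#nullable enable
using System;

class Program
{
    static void Main()
    {
        string? name = null;                // declared nullable: may be null
        // Console.WriteLine(name.Length);  // warning CS8602: possible null dereference

        if (name != null)
        {
            // The compiler knows name isn't null inside this branch.
            Console.WriteLine(name.Length);
        }

        string greeting = "hello";          // non-nullable: must never be assigned null
        Console.WriteLine(greeting.Length); // 5
    }
}
```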
Watch This is how you get rid of null reference exceptions forever, and check out Try out Nullable Reference Types and Nullable reference types to learn more.
Default interface member implementations
Today, once you publish an interface, that's it: you can't add members to it without breaking all of its existing implementers.
With C# 8.0, you can provide a body for an interface member. As a result, if a class that implements the interface doesn't implement that member (perhaps because it wasn't yet present when they wrote the code), the calling code will get the default implementation instead.
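A small sketch of an interface along these lines (this ILogger shape is illustrative, not the original post's exact code):

```csharp
using System;

interface ILogger
{
    void Log(string message);

    // A member added after the interface shipped. Existing implementers keep
    // compiling because this default body is used when they don't provide one.
    void Log(Exception ex) => Log(ex.ToString());
}

class ConsoleLogger : ILogger
{
    // Only the original member is implemented; Log(Exception) falls back to
    // the default implementation declared in the interface.
    public void Log(string message) => Console.WriteLine(message);
}
```

Note that default members are visible through the interface type, so the call site looks like `ILogger logger = new ConsoleLogger(); logger.Log(new Exception("oops"));`.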
In this example, the ConsoleLogger class does not need to implement the ILogger Log (Exception) overload, as it is declared with a default implementation. You can now add new members to existing public interfaces as long as you provide a default implementation for existing implementers to use.
Async streams
You can now foreach over an async stream of data using IAsyncEnumerable<T>. This new interface is exactly what you'd expect: an asynchronous version of IEnumerable<T>.
The language lets you await foreach over these to consume their elements. On the producing side, you yield return items to produce an async stream. It might sound a bit complicated, but it is incredibly easy in practice.
The following example demonstrates both producing and consuming async streams. The foreach statement is async, and itself uses yield return to produce an async stream for callers. This pattern (using yield return) is the recommended model for producing async streams.
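A sketch of that shape (the delay and the values produced are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // Consuming: await foreach iterates the stream as elements arrive.
        await foreach (int number in GetNumbersAsync())
        {
            Console.WriteLine(number);
        }
    }

    // Producing: an async iterator that yields elements after awaiting work.
    public static async IAsyncEnumerable<int> GetNumbersAsync()
    {
        for (int i = 0; i < 5; i++)
        {
            await Task.Delay(100); // simulate asynchronous work
            yield return i;
        }
    }
}
```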
In addition to being able to await foreach, you can also create async iterators, e.g. an iterator that returns IAsyncEnumerable<T>/IAsyncEnumerator<T> in which you can both await and yield return. For objects that need to be disposed, you can use IAsyncDisposable, which various framework types implement, such as Stream and Timer.
Index and range
We've created new syntax and types that you can use to describe indexers, for array element access, or for any other type that exposes direct data access. This includes support for a single value (the usual definition of an index) or two values (describing a range).
Index is a new type that describes an array index. You can create an Index from an int that counts from the beginning, or with the ^ prefix operator that counts from the end. You can see both cases in the following example:
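For instance (the array contents are illustrative):

```csharp
using System;

class Program
{
    static void Main()
    {
        string[] words = { "the", "quick", "brown", "fox" };

        Index first = 0;   // counts from the start
        Index last = ^1;   // ^1 counts from the end: the last element

        Console.WriteLine(words[first]); // the
        Console.WriteLine(words[last]);  // fox
    }
}
```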
Range is similar, consisting of two Index values, one for the start and one for the end, and can be written with an x..y range expression. You can then index with a Range to produce a slice of the underlying data, as demonstrated in the following example:
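Again with illustrative data:

```csharp
using System;

class Program
{
    static void Main()
    {
        int[] numbers = { 0, 1, 2, 3, 4, 5 };

        Range middle = 1..4;             // start is inclusive, end is exclusive
        int[] slice = numbers[middle];   // { 1, 2, 3 }

        Console.WriteLine(string.Join(", ", slice)); // 1, 2, 3
    }
}
```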
Using declarations
Are you tired of using statements that require indenting your code? No more! You can now write the following code, which attaches a using declaration to the scope of the current statement block, and disposes the object at the end of it.
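A sketch, assuming a hypothetical WriteLinesToFile helper and output path:

```csharp
using System.Collections.Generic;
using System.IO;

static class FileWriter
{
    public static void WriteLinesToFile(string path, IEnumerable<string> lines)
    {
        // The using declaration disposes the writer at the end of the
        // enclosing block; no extra braces or indentation required.
        using var writer = new StreamWriter(path);
        foreach (string line in lines)
        {
            writer.WriteLine(line);
        }
    } // writer is disposed (and the file flushed) here
}
```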
Switch expressions
Anyone who uses C# probably loves the idea of the switch statement, but not its syntax. C# 8 introduces switch expressions, which enable the following:
terser syntax
returns a value because it is an expression
fully integrated with pattern matching
The switch keyword is "infix", meaning the keyword sits between the tested value (which is o in the first example) and the list of cases, much like lambda expressions.
The first examples use the lambda (expression-bodied) syntax for methods, which integrates well with switch expressions but isn't required.
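A sketch of the kind of example being described (the Point type and Display method are illustrative stand-ins for the original post's code):

```csharp
public class Point
{
    public int X { get; set; }
    public int Y { get; set; }
}

public static class Formatter
{
    // The switch keyword sits between the tested value (o) and the cases.
    public static string Display(object o) => o switch
    {
        Point { X: 0, Y: 0 }         => "origin",
        Point { X: var x, Y: var y } => $"({x}, {y})",
        _                            => "unknown"
    };
}
```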
There are two patterns at play in this example. o is first matched against the Point type pattern, and then against the property pattern inside the {curly braces}. _ describes the discard pattern, which is the same as default in switch statements.
You can go one step further and rely on tuple deconstruction and parameter position, as you can see in the following example:
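A sketch in the same spirit (the door state machine here is an illustrative stand-in):

```csharp
using System;

enum State { Opened, Closed, Locked }
enum Transition { Open, Close, Lock, Unlock }

static class Door
{
    public static State ChangeState(State current, Transition transition, bool hasKey) =>
        // The (current, transition) tuple is matched positionally against each case.
        (current, transition) switch
        {
            (State.Closed, Transition.Open)               => State.Opened,
            (State.Opened, Transition.Close)              => State.Closed,
            (State.Closed, Transition.Lock) when hasKey   => State.Locked,
            (State.Locked, Transition.Unlock) when hasKey => State.Closed,
            _ => throw new InvalidOperationException("Invalid transition")
        };
}
```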
In this example, you can see that you don't need to define an explicit variable or type for each of the cases. Instead, the compiler can match the tuple under test with the tuples defined for each of the cases.
All of these patterns allow you to write declarative code that captures your intent instead of procedural code that implements tests for it. The compiler becomes responsible for implementing that boring procedural code and is guaranteed to always do so correctly.
There will still be cases where switch statements will be a better choice than switch expressions and patterns can be used with either syntax style.
Introducing a fast JSON API
.NET Core 3.0 includes a new family of JSON APIs that enable reader/writer scenarios, random access with a document object model (DOM), and serialization. You are probably familiar with Json.NET. The new APIs are designed to satisfy many of the same scenarios, but with less memory and faster execution.
You can see the initial motivation and description of the plan in The Future of JSON in .NET Core 3.0. This includes James Newton-King, the author of Json.NET, explaining why a new API was created, rather than extending Json.NET. In short, we wanted to create a new JSON API that took advantage of all the new performance features in .NET Core and delivered performance in line with that. It wasn't possible to do this in an existing code base like Json.NET while maintaining compatibility.
Let's take a look at the new API, layer by layer.
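As a quick taste of the serializer and DOM layers (the type and values are illustrative):

```csharp
using System;
using System.Text.Json;

class WeatherForecast
{
    public DateTime Date { get; set; }
    public int TemperatureC { get; set; }
    public string Summary { get; set; }
}

class Program
{
    static void Main()
    {
        var forecast = new WeatherForecast
        {
            Date = new DateTime(2019, 9, 23),
            TemperatureC = 25,
            Summary = "Warm"
        };

        // Serializer layer: object <-> JSON text.
        string json = JsonSerializer.Serialize(forecast);
        WeatherForecast roundTripped = JsonSerializer.Deserialize<WeatherForecast>(json);
        Console.WriteLine(roundTripped.Summary); // Warm

        // DOM layer: read-only random access with JsonDocument.
        using JsonDocument doc = JsonDocument.Parse(json);
        Console.WriteLine(doc.RootElement.GetProperty("TemperatureC").GetInt32()); // 25
    }
}
```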
Introducing the new SqlClient
SqlClient is the data provider you use to access Microsoft SQL Server and Azure SQL Database, via one of the popular .NET O/RMs, such as EF Core or Dapper, or directly using the ADO.NET APIs. It will now be released and updated as a Microsoft.Data.SqlClient NuGet package and supported for both .NET Framework and .NET Core applications. By using NuGet, it will be easier for the SQL team to provide updates to both .NET Framework and .NET Core users.
ARM and IoT support
We added support for ARM64 Linux this release, after adding support for ARM32 for Linux and Windows in .NET Core 2.1 and 2.2, respectively. While some IoT workloads leverage our existing x64 capabilities, many users have requested ARM support. It is now live and we are working with customers who are planning large deployments.
Many IoT deployments using .NET are edge devices and completely network-oriented. Other scenarios require direct access to the hardware. In this release, we've added the ability to use serial ports on Linux and leverage digital pins on devices like Raspberry Pi. The pins use a variety of protocols. We've added support for GPIO, PWM, I2C, and SPI, allowing you to read sensor data, interact with radios, write text and images to displays, and many other scenarios.
This feature is available as part of the following packages:
As part of support for GPIO (and friends), we took a look at what was already available. We found APIs for C# and also Python. In both cases, the APIs were wrappers over native libraries, which were often licensed under the GPL. We did not see a path forward with that approach. Instead, we created a 100% C# solution to implement these protocols. This means that our APIs will work anywhere .NET Core is supported, that they can be debugged with a C# debugger (via SourceLink), and that they support multiple underlying Linux drivers (sysfs, libgpiod, and board-specific). All code is licensed under MIT. We see this approach as a big improvement for .NET developers over what has existed.
See dotnet/iot to learn more. The best places to start are samples or devices. We created some experiments while adding GPIO support. One of them was confirming that we could control an Arduino from a Pi through a serial connection. It was surprisingly easy. We also spent a lot of time playing with LED matrices, as you can see in this RGB LED matrix example. We expect to share more of these experiments over time.
Updated the .NET Core runtime roll-forward policy
The .NET Core runtime, or more precisely the runtime binder, now enables major-version roll-forward as an opt-in policy. The runtime binder already enables roll-forward on patch and minor versions as a default policy. We decided to expose a broader set of policies, which we expected would be important for various scenarios, but did not change the default roll-forward behavior.
There is a new property called RollForward, which accepts the following values:
LatestPatch: Roll forward to the highest patch version. This disables the Minor policy.
Minor: Roll forward to the lowest higher minor version if the requested minor version is missing. If the requested minor version is present, the LatestPatch policy is used. This is the default policy.
Major: Roll forward to the lowest higher major version, and lowest minor version, if the requested major version is missing. If the requested major version is present, the Minor policy is used.
LatestMinor: Roll forward to the highest minor version, even if the requested minor version is present.
LatestMajor: Roll forward to the highest major and highest minor version, even if the requested major version is present.
Disable: Don't roll forward. Bind only to the specified version. This policy isn't recommended for general use, since it disables the ability to roll forward to the latest patches. It is only recommended for testing.
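As a sketch, the policy can be set via the RollForward property in the project file:

```xml
<!-- Illustrative project file fragment: opt in to major-version roll-forward -->
<PropertyGroup>
  <RollForward>Major</RollForward>
</PropertyGroup>
```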
See Runtime binding behavior and dotnet/core-setup #5691 for more information.
Limitations of Docker and cgroups
Many developers are packaging and running their application with containers. A key scenario is limiting a container's resources such as CPU or memory. We implemented support for memory limits back in 2017. Unfortunately, we found that the implementation wasn't aggressive enough to reliably stay under the configured limits, and applications were still being OOM-killed when memory limits were set. We have fixed this with .NET Core 3.0, and highly recommend that .NET Core Docker users upgrade to .NET Core 3.0 because of this improvement.
The Docker Resource Limits feature is based on cgroups, which is a feature of the Linux kernel. From a runtime perspective, we need to target cgroup primitives.
You can limit the available memory for a container with the docker run -m argument, as shown in the following example that creates an Alpine-based container with a memory limit of 4 MB (and then prints the memory limit):
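The command in question would look something like the following (a sketch; the cgroup v1 path is shown, and it differs on hosts using cgroup v2):

```shell
# Create an Alpine-based container with a 4 MB memory limit, then print the
# limit the kernel actually applied (in bytes).
docker run -m 4mb --rm alpine cat /sys/fs/cgroup/memory/memory.limit_in_bytes
```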
We've also added changes to better support CPU limits (--cpus). This includes changing how the runtime rounds up or down for decimal CPU values. In the case where --cpus is set to a value close (enough) to a smaller integer (for example, 1.499999999), the runtime would previously round that value down (in this case, to 1). As a result, the runtime would take advantage of less CPU than requested, leading to CPU underutilization. By rounding the value up, the runtime augments the pressure on the OS thread scheduler, but even in the worst-case scenario (--cpus=1.000000001, previously rounded down to 1, now rounded up to 2), we have not observed any overutilization of the CPU leading to performance degradation.
The next step was to ensure that the thread pool respected the CPU limits. Part of the thread pool algorithm is calculating CPU busy time, which is, in part, a function of available CPUs. By taking CPU limits into account when calculating CPU busy time, we avoid various threadpool heuristics from competing with each other: one trying to allocate more threads to increase CPU busy time, and the other trying to allocate fewer threads because adding more threads doesn't improve throughput.
Reduced GC heap size by default
As we worked to improve support for Docker memory limits, we were inspired to make more general GC policy updates to improve memory usage for a broader set of applications (even when not running in a container). The changes better align the generation 0 allocation budget with modern processor cache sizes and cache hierarchy.
Damian Edwards on our team noticed that ASP.NET benchmark memory usage was cut in half with no negative effect on other performance metrics. That's a stunning improvement! As he says, these are the new defaults, with no code changes required (other than adopting .NET Core 3.0).
The memory savings we saw with the ASP.NET benchmarks may or may not be representative of what you'll see with your application. We'd love to hear how these changes reduce memory usage for your application.
Better support for many-proc machines
Based on .NET's Windows heritage, the GC needed to implement the Windows concept of processor groups to support machines with 64+ processors. This implementation was made in the .NET Framework, 5-10 years ago. With .NET Core, we initially chose to have the Linux PAL emulate that same concept, even though it doesn't exist on Linux. We have since abandoned this concept in the GC and moved it exclusively into the Windows PAL.
The GC now exposes a configuration switch, GCHeapAffinitizeRanges, for specifying affinity masks on machines with 64+ processors. Maoni Stephens wrote about this change in improving the CPU configuration for GC on machines with >64 CPUs.
GC Support for large pages
Large Pages (or Huge Pages, on Linux) is a feature where the operating system is able to establish memory regions larger than the native page size (often 4K), to improve performance for applications that request these large pages.
When a virtual-to-physical address translation occurs, a cache called the translation lookaside buffer (TLB) is first consulted (often in parallel) to check whether a physical translation for the virtual address being accessed is available, to avoid a potentially expensive page-table walk. Each large-page translation uses a single translation buffer entry inside the CPU. The size of this entry is typically three orders of magnitude larger than the native page size; this increases the efficiency of the translation buffer, which can increase the performance of frequently accessed memory. This win can be even more significant in a virtual machine, which has a two-level TLB.
The GC can now be configured with the GCLargePages opt-in feature to choose to allocate large pages on Windows. Using large pages reduces TLB misses, so can potentially increase the perf of the application overall, however, the feature has its own set of limitations that should be considered. Bing has been experimenting with this feature and seeing performance improvements.
.NET Core Version APIs
We have improved the .NET Core version APIs in .NET Core 3.0. They now return the version information you would expect. These changes, while objectively better, are technically breaking and could break applications that rely on existing version APIs for various information.
You can now access the following version information:
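For example, a couple of the updated APIs (output depends on the installed runtime, so none is shown):

```csharp
using System;
using System.Runtime.InteropServices;

class Program
{
    static void Main()
    {
        // The runtime version, e.g. 3.0.0 on .NET Core 3.0
        // (previous releases reported a 4.x version here).
        Console.WriteLine(Environment.Version);

        // A descriptive framework string, e.g. ".NET Core 3.0.0".
        Console.WriteLine(RuntimeInformation.FrameworkDescription);
    }
}
```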
Event pipe improvements
Event Pipe now supports multiple sessions. This means you can consume events with EventListener in-proc and simultaneously have event pipe clients out-of-process.
New performance counters added:
- % Time in GC
- Generation 0 heap size
- Generation 1 heap size
- Generation 2 heap size
- LOH Heap size
- Allocation rate
- Number of assemblies loaded
- ThreadPool number of threads
- Monitor lock contention rate
- ThreadPool work item queue
- ThreadPool completed work item rate
Profiler attach is now implemented using the same Event Pipe infrastructure.
See Playing with Counters by David Fowler to get an idea of what you can do with the event pipe to perform your own performance investigations or simply monitor the state of your application.
See dotnet-counters to install the dotnet-counters tool.
HTTP/2 support
We now have support for HTTP/2 in HttpClient. The new protocol is a requirement for some APIs, such as gRPC and Apple Push Notification Service. We expect more services to require HTTP/2 in the future. ASP.NET also has support for HTTP/2.
Note: The preferred HTTP protocol version will be negotiated via TLS/ALPN, and HTTP/2 will only be used if the server chooses to use it.
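A sketch of requesting HTTP/2 from HttpClient (the URL is a placeholder, and the server must also support HTTP/2 for it to be used):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        var client = new HttpClient();

        // Request HTTP/2; the final protocol is negotiated via TLS/ALPN,
        // so the server may still answer over HTTP/1.1.
        var request = new HttpRequestMessage(HttpMethod.Get, "https://example.com")
        {
            Version = new Version(2, 0)
        };

        HttpResponseMessage response = await client.SendAsync(request);
        Console.WriteLine(response.Version); // the protocol actually used
    }
}
```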
Multi-level compilation
Tiered compilation was added as an opt-in feature in .NET Core 2.1. It's a feature that enables the runtime to more adaptively use the just-in-time (JIT) compiler to achieve better performance, both at startup and to maximize throughput. It is enabled by default with .NET Core 3.0. We've made many improvements to the feature over the past year, including testing it with a variety of workloads, including websites, PowerShell Core, and Windows desktop apps. Performance is much better, which is what enabled us to turn it on by default.
IEEE floating point improvements
The floating point APIs have been updated to comply with the IEEE 754-2008 revision. The goal of the .NET Core floating point work is to expose all "required" operations and ensure that they are behaviorally compliant with the IEEE spec.
Parsing and formatting fixes:
- Correctly parse and round inputs of any length.
- Parse and format negative zero correctly.
- Correctly parse Infinity and NaN by performing a case-insensitive check and allowing an optional preceding + where applicable.
New math APIs:
- BitIncrement/BitDecrement: correspond to the IEEE nextUp and nextDown operations. They return the smallest floating point number that compares greater or lesser than the input (respectively). For example, Math.BitIncrement(0.0) returns double.Epsilon.
- MaxMagnitude/MinMagnitude: correspond to the IEEE maxNumMag and minNumMag operations. They return the value that is greater or lesser in magnitude of the two inputs (respectively). For example, Math.MaxMagnitude(2.0, -3.0) returns -3.0.
- ILogB: corresponds to the IEEE logB operation, which returns an integral value; it returns the integral base-2 logarithm of the input parameter. This is effectively the same as floor(log2(x)), but done with minimal rounding error.
- ScaleB: corresponds to the IEEE scaleB operation, which takes an integral value; it effectively returns x * pow(2, n), but is done with minimal rounding error.
- Log2: corresponds to the IEEE log2 operation; it returns the base-2 logarithm while minimizing rounding error.
- FusedMultiplyAdd: corresponds to the IEEE fma operation; it performs a fused multiply-add. That is, it does (x * y) + z as a single operation, thereby minimizing rounding error. An example would be FusedMultiplyAdd(1e308, 2.0, -1e308), which returns 1e308. The regular (1e308 * 2.0) - 1e308 returns double.PositiveInfinity.
- CopySign: corresponds to the IEEE copySign operation; it returns the value of x, but with the sign of y.
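A few of these, exercised directly (the expected values come from the descriptions above):

```csharp
using System;

class Program
{
    static void Main()
    {
        // The smallest double greater than 0.0 is double.Epsilon.
        Console.WriteLine(Math.BitIncrement(0.0) == double.Epsilon); // True

        // The input with the greater magnitude wins, keeping its sign.
        Console.WriteLine(Math.MaxMagnitude(2.0, -3.0));             // -3

        // The value of x with the sign of y.
        Console.WriteLine(Math.CopySign(5.0, -1.0));                 // -5

        // Fused multiply-add avoids the intermediate overflow that
        // (1e308 * 2.0) - 1e308 would produce.
        Console.WriteLine(Math.FusedMultiplyAdd(1e308, 2.0, -1e308)); // 1E+308
    }
}
```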
.NET platform-dependent intrinsics
We've added APIs that allow access to certain performance-oriented CPU instructions, such as SIMD or bit manipulation instruction sets. These instructions can help achieve large performance improvements in certain scenarios, such as efficiently processing data in parallel. In addition to exposing APIs for your programs to use, we've started using these instructions to accelerate .NET libraries as well.
The following CoreCLR PRs demonstrate a few of the intrinsics, either via their implementation or their use:
Implement simple SSE2 hardware intrinsics
Implement SSE hardware intrinsics
Arm64 Base HW Intrinsics
Use TZCNT and LZCNT to Locate {First | Last} Found {Byte | Char}
For more information, take a look at .NET Platform Dependent Intrinsics, which defines an approach for defining this hardware infrastructure, allowing Microsoft, chip vendors, or any other company or individual to define hardware/chip APIs that should be exposed to .NET code.
TLS 1.3 and OpenSSL 1.1.1 now supported on Linux
.NET Core can now take advantage of TLS 1.3 support in OpenSSL 1.1.1. According to the OpenSSL team, TLS 1.3 offers multiple benefits:
Improved connection times by reducing the number of round trips required between client and server
Improved security by removing various obsolete and insecure cryptographic algorithms and encrypting more of the connection handshake
.NET Core 3.0 is capable of using OpenSSL 1.1.1, OpenSSL 1.1.0, or OpenSSL 1.0.2 (whichever version is best found, on a Linux system). When OpenSSL 1.1.1 is available, the SslStream and HttpClient types will use TLS 1.3 when using SslProtocols.None (system default protocols), assuming the client and server support TLS 1.3.
.NET Core will support TLS 1.3 on Windows and macOS (automatically, we expect) when support becomes available on those platforms.
Cryptography
We have added support for AES-GCM and AES-CCM ciphers, implemented via System.Security.Cryptography.AesGcm and System.Security.Cryptography.AesCcm. These algorithms are both Authenticated Encryption with Associated Data (AEAD) algorithms, and the first Authenticated Encryption (AE) algorithms added to .NET Core.
.NET Core 3.0 now supports importing and exporting asymmetric public and private keys from standard formats, without needing to use an X.509 certificate.
All key types (RSA, DSA, ECDsa, ECDiffieHellman) support the X.509 SubjectPublicKeyInfo format for public keys and the PKCS#8 PrivateKeyInfo and PKCS#8 EncryptedPrivateKeyInfo formats for private keys. RSA also supports PKCS #1 RSAPublicKey and PKCS #1 RSAPrivateKey. All export methods produce DER-encoded binary data, and import methods expect the same; If a key is stored in PEM text format, the caller will have to base64 decode the contents before calling an import method.
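A sketch of a round trip through the SubjectPublicKeyInfo format:

```csharp
using System;
using System.Security.Cryptography;

class Program
{
    static void Main()
    {
        using RSA rsa = RSA.Create(2048);

        // Export the public key in X.509 SubjectPublicKeyInfo (DER) format.
        byte[] spki = rsa.ExportSubjectPublicKeyInfo();

        // Import it into a separate RSA instance; no X.509 certificate needed.
        using RSA publicOnly = RSA.Create();
        publicOnly.ImportSubjectPublicKeyInfo(spki, out int bytesRead);

        Console.WriteLine(bytesRead == spki.Length); // True
    }
}
```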
PKCS#8 files can be controlled with the System.Security.Cryptography.Pkcs.Pkcs8PrivateKeyInfo class.
PFX/PKCS#12 files can be inspected and manipulated with System.Security.Cryptography.Pkcs.Pkcs12Info and System.Security.Cryptography.Pkcs.Pkcs12Builder, respectively.
New Japanese Era (Reiwa)
On May 1, 2019, Japan began a new era called Reiwa. Software that supports Japanese calendars, such as .NET Core, must be updated to accommodate Reiwa. .NET Core and .NET Framework have been updated and correctly handle the formatting and parsing of Japanese dates with the new era.
.NET relies on the operating system or other updates to correctly process Reiwa dates. If you or your customers use Windows, download the latest updates for your version of Windows. If you're running macOS or Linux, download and install ICU version 64.2, which supports the new Japanese era.
Managing a New Era in the Japanese Calendar on the .NET blog has more information about .NET support for the Japanese New Era.
Improvements to the assembly load context
Improvements to AssemblyLoadContext:
Enable naming contexts
Added ability to enumerate ALCs
Added ability to enumerate assemblies within an ALC
Made the type concrete - so instantiation is simpler (no requirement for custom types for simple scenarios)
See dotnet/corefx #34791 for more details. The appwithalc example demonstrates these new features.
By using AssemblyDependencyResolver together with a custom AssemblyLoadContext, an application can load plug-ins so that each plug-in's dependencies are loaded from the correct location and one plug-in's dependencies do not conflict with another. The AppWithPlugin sample includes plug-ins with conflicting dependencies and plug-ins that rely on satellite assemblies or native libraries.
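The improvements listed above can be sketched in a few lines (a minimal illustration, not the full sample):

```csharp
using System;
using System.Reflection;
using System.Runtime.Loader;

class AlcDemo
{
    static void Main()
    {
        // The type is now concrete, so it can be instantiated directly,
        // and contexts can be given names.
        var demo = new AssemblyLoadContext("DemoContext");

        // Enumerate all load contexts and the assemblies inside each one.
        foreach (AssemblyLoadContext context in AssemblyLoadContext.All)
        {
            Console.WriteLine($"Context: {context.Name}");
            foreach (Assembly assembly in context.Assemblies)
            {
                Console.WriteLine($"  {assembly.GetName().Name}");
            }
        }
    }
}
```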
Assembly unloadability
Assembly unloadability is a new capability of AssemblyLoadContext. This new functionality is largely transparent from an API perspective, exposed with only a few new APIs. It allows a loader context to be unloaded, releasing all memory for instantiated types, static fields, and the assemblies themselves. An application should be able to load and unload assemblies via this mechanism forever without experiencing memory leaks.
We expect this new feature to be used for the following scenarios:
Plugin scenarios where dynamic plug-in loading and unloading is required.
Dynamically compile, run, and then unload code. Useful for websites, scripting engines, etc.
Loading assemblies for introspection (like ReflectionOnlyLoad), although MetadataLoadContext will be a better choice in many cases.
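A minimal sketch of the unload pattern follows; the plug-in path is an assumption, and the WeakReference/GC loop is the conventional way to observe that the context has actually been collected:

```csharp
using System;
using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.Loader;

class UnloadDemo
{
    // Keep load/unload in a non-inlined method so no local variable
    // keeps the context alive after Unload is requested.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static WeakReference LoadAndUnload(string pluginPath)
    {
        var alc = new AssemblyLoadContext("Plugin", isCollectible: true);
        Assembly plugin = alc.LoadFromAssemblyPath(pluginPath);
        Console.WriteLine($"Loaded {plugin.GetName().Name}");

        alc.Unload(); // initiates unloading; memory is reclaimed by the GC
        return new WeakReference(alc);
    }

    static void Main(string[] args)
    {
        WeakReference alcRef = LoadAndUnload(args[0]);
        for (int i = 0; alcRef.IsAlive && i < 10; i++)
        {
            GC.Collect();
            GC.WaitForPendingFinalizers();
        }
        Console.WriteLine($"Unloaded: {!alcRef.IsAlive}");
    }
}
```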
Assembly Metadata Reading with MetadataLoadContext
We added MetadataLoadContext, which allows assembly metadata to be read without affecting the caller's application domain. Assemblies are read as data, including assemblies built for different architectures and platforms than the current runtime environment. MetadataLoadContext is similar to the ReflectionOnlyLoad type, which is available only in the .NET Framework.
MetadataLoadContext is available in the System.Reflection.MetadataLoadContext package. It is a .NET Standard 2.0 package.
Scenarios for MetadataLoadContext include design-time features, build-time tools, and run-time light-up features that need to inspect a set of assemblies as data and free all file locks and memory after the inspection is performed.
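A hedged sketch of inspecting an assembly as data; the choice of resolver search path is an assumption (a real tool would typically add the framework reference assemblies too):

```csharp
using System;
using System.IO;
using System.Reflection;

class InspectDemo
{
    static void Main(string[] args)
    {
        string target = args[0]; // assembly to inspect, as data

        // Resolve references from the target's own directory.
        var resolver = new PathAssemblyResolver(
            Directory.GetFiles(Path.GetDirectoryName(target), "*.dll"));

        using var mlc = new MetadataLoadContext(resolver);
        Assembly assembly = mlc.LoadFromAssemblyPath(target);

        foreach (Type type in assembly.GetTypes())
        {
            Console.WriteLine(type.FullName);
        }
        // Disposing the MetadataLoadContext releases file locks and memory.
    }
}
```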
Native hosting example
The team has released a native hosting sample. It demonstrates a best-practice approach for hosting .NET Core in a native application.
As part of .NET Core 3.0, we now expose general functionality to .NET Core native hosts that was previously only available to .NET Core managed applications via the officially provided .NET Core hosts. The functionality is mainly related to assembly loading. It should make it easier to produce native hosts that can take advantage of the full feature set of .NET Core.
Other API improvements
We've optimized Span<T>, Memory<T>, and related types that were introduced in .NET Core 2.1. Common operations such as span construction, slicing, parsing, and formatting now perform better. Additionally, types like String have seen under-the-cover improvements, making them more efficient when used as keys with Dictionary<TKey, TValue>.
The following improvements are also new:
- Brotli support built-in to HttpClient
- ThreadPool.UnsafeQueueUserWorkItem(IThreadPoolWorkItem)
- Unsafe.Unbox
- CancellationToken.Unregister
- Complex arithmetic operators
- Socket APIs for TCP keep alive
- StringBuilder.GetChunks
- IPEndPoint parsing
- RandomNumberGenerator.GetInt32
- System.Buffers.SequenceReader
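A few of the APIs above can be sketched in one place (an illustration only; the values are arbitrary):

```csharp
using System;
using System.Net;
using System.Security.Cryptography;
using System.Text;

class NewApisDemo
{
    static void Main()
    {
        // IPEndPoint parsing, new in 3.0.
        IPEndPoint endpoint = IPEndPoint.Parse("192.168.0.10:8080");
        Console.WriteLine(endpoint.Port); // 8080

        // Cryptographically strong random integer in [1, 7).
        int die = RandomNumberGenerator.GetInt32(1, 7);
        Console.WriteLine(die >= 1 && die < 7); // True

        // Enumerate a StringBuilder's internal chunks without flattening it.
        var builder = new StringBuilder("hello, ").Append("world");
        foreach (ReadOnlyMemory<char> chunk in builder.GetChunks())
        {
            Console.Write(chunk.ToString());
        }
        Console.WriteLine();
    }
}
```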
Applications now have native executables by default
.NET Core applications are now built with native executables. This is new for framework-dependent applications. Until now, only self-contained applications had executables.
You can expect the same things with these executables as you would with other native executables, such as:
- You can double-click the executable file to launch the application.
- You can launch the application from a command prompt, using myapp.exe on Windows, and ./myapp on Linux and macOS.
The executable generated as part of the build will match your operating system and CPU. For example, if you are on an x64 Linux machine, the executable will only work on that type of machine, not on a Windows machine and not on an ARM Linux machine. This is because executables are native code (just like C++). If you want to target another type of machine, you need to publish with a runtime (-r) argument. You can continue to launch applications with the dotnet command, without using native executables, if you prefer.
Optimize your .NET Core apps with ReadyToRun images
You can improve the startup time of your .NET Core application by compiling your application assemblies in ReadyToRun (R2R) format. R2R is a form of ahead-of-time (AOT) compilation. It is an opt-in, publish-time feature in .NET Core 3.0.
R2R binaries improve startup performance by reducing the amount of work the JIT must do as your application loads. The binaries contain native code similar to what the JIT would produce, giving the JIT a break when performance matters most (at startup). R2R binaries are larger because they contain both the intermediate language (IL) code, which is still needed for some scenarios, and the native version of the same code, to improve startup.
To enable ReadyToRun compilation:
- Set the PublishReadyToRun property to true.
- Publish using an explicit RuntimeIdentifier.
Note: When application assemblies are compiled, the native code produced is platform- and architecture-specific (which is why you must specify a valid RuntimeIdentifier when publishing).
Here is an example:
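A project-file sketch follows; the OutputType and TargetFramework values shown are illustrative assumptions:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <PublishReadyToRun>true</PublishReadyToRun>
  </PropertyGroup>
</Project>
```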
And publish using the following command:
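For example (win-x64 is just one possible RuntimeIdentifier; substitute the platform you are targeting):

```shell
dotnet publish -c Release -r win-x64
```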
Note: the RuntimeIdentifier can target a different operating system or chip than the machine you are building on. It can also be set in the project file.
Assembly linking
The .NET Core 3.0 SDK comes with a tool that can reduce the size of apps by analyzing IL and trimming unused assemblies. It is another opt-in, publish-time feature in .NET Core 3.0.
With .NET Core, it's always been possible to publish self-contained apps that include everything you need to run your code, without requiring .NET to be installed on the deployment target. In some cases, the app requires only a small subset of the framework to function and could potentially be much smaller by including only the libraries used.
We use the IL linker to scan your application's IL to detect what code is actually required and then trim unused framework libraries. This can significantly reduce the size of some apps. Typically, small tool-like console apps benefit the most as they tend to use fairly small subsets of the framework and are generally more susceptible to trimming.
To use the linker:
- Set the PublishTrimmed property to true.
- Publish using an explicit RuntimeIdentifier.
Here is an example:
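A project-file sketch follows; the OutputType and TargetFramework values shown are illustrative assumptions:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <PublishTrimmed>true</PublishTrimmed>
  </PropertyGroup>
</Project>
```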
And publish using the following command:
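For example (linux-x64 is just one possible RuntimeIdentifier; substitute the platform you are targeting):

```shell
dotnet publish -c Release -r linux-x64
```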
Note: the RuntimeIdentifier can target a different operating system or chip than the machine you are building on. It can also be set in the project file.
See Single File Bundler for more information.
The assembly trimmer, ahead-of-time compilation (via crossgen), and single-file bundling are all new features in .NET Core 3.0 that can be used together or separately.
We expect that some of you will prefer a single exe produced by an ahead-of-time compiler over the self-extracting-executable approach we are providing in .NET Core 3.0. The AOT compiler approach will ship as part of the .NET 5 release.
dotnet build now copies dependencies
dotnet build now copies the NuGet dependencies for your application from the NuGet cache to the build output folder during the build operation. Until this release, those dependencies were only copied as part of dotnet publish. This change allows you to copy the build output to different machines.
There are some operations, such as linking and Razor page publishing, that still require publishing.
.NET Core Tools - local installation
The .NET Core tools have been updated to allow local installation. They have advantages over global tools, which were added in .NET Core 2.1.
Local installation allows the following:
- Limit the scope of use of a tool.
- Always use a specific version of the tool, which may differ from a globally installed tool or another local installation. This is based on the version in the local tools manifest file.
- Launched with dotnet, as in dotnet mytool.
Note: See the Local Tools Early Preview documentation for more information.
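A hedged sketch of the local-tool flow; dotnetsay is the sample tool package used in the docs, and any tool package would work the same way:

```shell
# Create a tool manifest (.config/dotnet-tools.json) in the repo root.
dotnet new tool-manifest

# Install a tool locally; its version is recorded in the manifest.
dotnet tool install dotnetsay

# Launch it through the dotnet driver.
dotnet dotnetsay "Hello from a local tool"
```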
.NET Core SDK installers will now upgrade in place
The MSI .NET Core SDK installers for Windows will now upgrade patch versions in place. This will reduce the number of SDKs installed on developer and production machines.
The update policy specifically targets .NET Core SDK feature bands. Feature bands are defined in groups of hundreds in the patch section of the version number. For example, 3.0.101 and 3.0.201 are versions in two different feature bands, while 3.0.101 and 3.0.199 are in the same feature band.
This means that when .NET Core SDK 3.0.101 becomes available and is installed, .NET Core SDK 3.0.100 will be removed from your machine if it exists. When .NET Core SDK 3.0.200 becomes available and is installed on the same computer, .NET Core SDK 3.0.101 will not be removed. In that situation, .NET Core SDK 3.0.200 will still be used by default, but .NET Core SDK 3.0.101 (or later .1xx) will still be usable if it is configured for use via global.json.
This approach is in line with the behavior of global.json, which allows roll forward between patch versions, but not SDK feature bands. Therefore, updating through the SDK installer does not result in errors due to a missing SDK. The feature bands also align with side-by-side Visual Studio installations for those users who install SDKs for using Visual Studio.
For more information, see:
.NET Core SDK sizing improvements
The .NET Core SDK is significantly smaller in .NET Core 3.0. The main reason is that we have changed the way we build the SDK, moving to purpose-built "packs" of various types (reference assemblies, frameworks, templates). In previous releases (including .NET Core 2.2), we built the SDK from NuGet packages, which included many artifacts that were not required and wasted a lot of space.
.NET Core 3.0 SDK Size (size change in parentheses)
| OS | Installer Size (change) | Disk Size (change) |
| --- | --- | --- |
| Windows | 164 MB (-440 KB; 0%) | 441 MB (-968 MB; -68.7%) |
| Linux | 115 MB (-55 MB; -32%) | 332 MB (-1068 MB; -76.2%) |
| macOS | 118 MB (-51 MB; -30%) | 337 MB (-1063 MB; -75.9%) |
The size improvements for Linux and macOS are notable. The improvement for Windows is smaller because we added WPF and Windows Forms as part of .NET Core 3.0. It's amazing that we added WPF and Windows Forms in 3.0 and that the installer is still (a little) smaller.
You can see the same benefit with the .NET Core SDK Docker images (here, limited to Debian and Alpine x64).
| Distro | 2.2 Size | 3.0 Size |
| --- | --- | --- |
| Debian | 1.74 GB | 706 MB |
| Alpine | 1.48 GB | 422 MB |
You can see how we calculated these file sizes in .NET Core 3.0 SDK Size Improvements. Step-by-step instructions are provided so you can run the same tests in your own environment.
Update on Docker publishing
Microsoft teams are now publishing container images to the Microsoft Container Registry (MCR). There are two main reasons for this change:
- Share Microsoft-provided container images across multiple registries, such as Docker Hub and Red Hat.
- Use Microsoft Azure as a global CDN for delivering Microsoft-provided container images.
On the .NET team, we are now publishing all .NET Core images to MCR. As you can see from the links (if you click on them), we continue to have "home pages" on Docker Hub. We intend for this to continue indefinitely. MCR does not offer such pages, but relies on public registries, such as Docker Hub, to provide users with image-related information.
Links to our old repositories, such as microsoft/dotnet and microsoft/dotnet-nightly, now redirect to the new locations. Existing images in those locations still exist and will not be deleted.
We will continue to maintain floating tags in the old repositories for the supported lifetime of the various .NET Core releases. For example, 2.1-sdk, 2.2-runtime, and latest are floating tags that will be maintained. Three-part version tags, like 2.1.2-sdk, will not be updated, as has always been the case. .NET Core 3.0 images will be published only to MCR.
For example, the correct tag string to extract the SDK 3.0 image now looks like this:
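Based on the image name used in the build examples later in this post, the pull command looks like:

```shell
docker pull mcr.microsoft.com/dotnet/core/sdk:3.0
```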
The new MCR string is used with both docker pull commands and Dockerfile FROM instructions.
See .NET Core Images Now Available via Microsoft Container Registry for more information.
The SDK Docker images contain PowerShell Core
PowerShell Core has been added to the .NET Core Docker SDK container images, per community requests. PowerShell Core is a cross-platform (Windows, Linux, and macOS) automation and configuration tool/framework that works well with existing tools and is optimized to handle structured data (e.g. JSON, CSV, XML, etc.), REST APIs, and object models. It includes a command-line shell, an associated scripting language, and a framework for processing cmdlets.
You can try PowerShell Core, as part of the .NET Core SDK container image, by running the following Docker command:
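A sketch of such a command, using the SDK image tag shown later in this post (the --rm flag, which removes the container on exit, is an optional addition):

```shell
docker run --rm -it mcr.microsoft.com/dotnet/core/sdk:3.0 pwsh
```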
There are two main scenarios that having PowerShell in the .NET Core SDK container image enables, which were not otherwise possible:
- Write .NET Core application Dockerfiles with PowerShell syntax, for any operating system.
- Write .NET Core application/library build logic that can be easily containerized.
Example syntax for starting PowerShell for a containerized (volume-mounted) build:
```shell
docker run -it -v c:\myrepo:/myrepo -w /myrepo mcr.microsoft.com/dotnet/core/sdk:3.0 pwsh build.ps1
docker run -it -v c:\myrepo:/myrepo -w /myrepo mcr.microsoft.com/dotnet/core/sdk:3.0 ./build.ps1
```
For the second example to work, on Linux, the .ps1 file must have the following pattern and must be formatted with Unix (LF), not Windows (CRLF), line endings:
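The pattern is a Unix shebang line pointing at pwsh as the interpreter; the script body below it is an illustrative assumption:

```powershell
#!/usr/bin/env pwsh
# Illustrative build step; replace with your own build logic.
dotnet build
```

For the script to be invoked directly as ./build.ps1, it must also be marked executable (chmod +x build.ps1).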
If you're new to PowerShell and want to learn more, we recommend checking out the introductory documentation.
Note: PowerShell Core is now available as part of the .NET Core 3.0 SDK container images. It is not part of the .NET Core 3.0 SDK.
Support for Red Hat
In April 2015, we announced that .NET Core was coming to Red Hat Enterprise Linux. Through an excellent engineering partnership with Red Hat, .NET Core 1.0 became available in Red Hat Software Collections in June 2016. Working with Red Hat engineers, we have learned (and continue to learn!) a lot about releasing software for the Linux community.
Over the past four years, Red Hat has shipped many .NET Core updates and significant releases, such as 2.1 and 2.2, on the same day as Microsoft. With .NET Core 2.2, Red Hat has expanded its .NET Core offerings to include OpenShift platforms. With the release of RHEL 8, we are excited to have .NET Core 2.1, and soon, 3.0, available in Red Hat Application Streams.
Closing
.NET Core 3.0 is a major new release of .NET Core and includes a broad set of improvements. We recommend that you start adopting .NET Core 3.0 as soon as possible. It greatly improves .NET Core in many ways, such as greatly reducing the size of the SDK and greatly improving support for key scenarios like containers and Windows desktop applications. There are also many small improvements that were not included in this post, which you will surely benefit from over time.
Please share your feedback with us, in the coming days, weeks or months. We hope you like it. We had a lot of fun doing this for you.
If you still want to read more, we recommend reading the following recent posts: