By Mike Rousos
This article provides guidelines for maximizing performance and reliability of ASP.NET Core apps.
Caching is discussed in several parts of this article. For more information, see Overview of caching in ASP.NET Core.
In this article, a hot code path is defined as a code path that is frequently called and where much of the execution time occurs. Hot code paths typically limit app scale-out and performance and are discussed in several parts of this article.
ASP.NET Core apps should be designed to process many requests simultaneously. Asynchronous APIs allow a small pool of threads to handle thousands of concurrent requests by not waiting on blocking calls. Rather than waiting on a long-running synchronous task to complete, the thread can work on another request.
A common performance problem in ASP.NET Core apps is blocking calls that could be asynchronous. Many synchronous blocking calls lead to Thread Pool starvation and degraded response times.
Do not block asynchronous execution by calling Task.Wait or Task<TResult>.Result.
Do not acquire locks in common code paths. ASP.NET Core apps perform best when architected to run code in parallel.
Do not call Task.Run and immediately await it. ASP.NET Core already runs app code on normal Thread Pool threads, so calling Task.Run only results in extra unnecessary Thread Pool scheduling. Even if the scheduled code would block a thread, Task.Run does not prevent that.
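For illustration, the following sketch contrasts a blocking call with its asynchronous equivalent. The RemoteService type is hypothetical; it stands in for any dependency that exposes a Task-returning method.
public class ReportController : Controller
{
    private readonly RemoteService _service;

    public ReportController(RemoteService service) => _service = service;

    // Avoid: .Result blocks a thread pool thread until the task completes.
    [HttpGet("/report-blocking")]
    public IActionResult GetBlocking()
    {
        var data = _service.GetDataAsync().Result;
        return Ok(data);
    }

    // Prefer: await frees the thread to serve other requests while the call is pending.
    [HttpGet("/report-async")]
    public async Task<IActionResult> GetAsync()
    {
        var data = await _service.GetDataAsync();
        return Ok(data);
    }
}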
A profiler, such as PerfView, can be used to find threads frequently added to the Thread Pool. The Microsoft-Windows-DotNETRuntime/ThreadPoolWorkerThread/Start event indicates a thread added to the thread pool.
A webpage shouldn't load large amounts of data all at once. When returning a collection of objects, consider whether the design could produce poor outcomes such as OutOfMemoryException or high memory consumption, thread pool starvation, slow response times, and frequent garbage collection.
Do add pagination to mitigate the preceding scenarios. Using page size and page index parameters, favor returning a partial result. When an exhaustive result is required, use pagination to asynchronously populate batches of results and avoid locking server resources.
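A minimal sketch of page-based retrieval, assuming a hypothetical EF Core ContosoDbContext with a Products DbSet:
[HttpGet("/products")]
public async Task<ActionResult<List<Product>>> GetPage(
    [FromServices] ContosoDbContext context, int pageIndex = 0, int pageSize = 20)
{
    // Return one page at a time instead of loading the entire table.
    return await context.Products
        .OrderBy(p => p.Id)
        .Skip(pageIndex * pageSize)
        .Take(pageSize)
        .ToListAsync();
}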
For more information on paging and limiting the number of returned records, see the related guidance in the EF Core and ASP.NET Core documentation.
Returning IEnumerable<T> from an action results in synchronous collection iteration by the serializer. The result is the blocking of calls and a potential for thread pool starvation. To avoid synchronous enumeration, use ToListAsync before returning the enumerable.
Beginning with ASP.NET Core 3.0, IAsyncEnumerable<T> can be used as an alternative to IEnumerable<T> that enumerates asynchronously. For more information, see Controller action return types.
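For example, an action can return IAsyncEnumerable<T> so that results are enumerated and serialized asynchronously. The ContosoDbContext and Products names are hypothetical, and AsAsyncEnumerable is an EF Core extension method.
[HttpGet("/products/streamed")]
public IAsyncEnumerable<Product> GetStreamed([FromServices] ContosoDbContext context)
{
    // The framework enumerates and serializes the sequence without blocking a thread.
    return context.Products.Where(p => p.IsActive).AsAsyncEnumerable();
}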
The .NET Core garbage collector manages allocation and release of memory automatically in ASP.NET Core apps. Automatic garbage collection generally means that developers don't need to worry about how or when memory is freed. However, cleaning up unreferenced objects takes CPU time, so developers should minimize allocating objects in hot code paths. Garbage collection is especially expensive on large objects (>= 85,000 bytes). Large objects are stored on the large object heap and require a full (generation 2) garbage collection to clean up. Unlike generation 0 and generation 1 collections, a generation 2 collection requires a temporary suspension of app execution. Frequent allocation and de-allocation of large objects can cause inconsistent performance.
Recommendations:
Do consider caching large objects that are frequently used. Caching large objects prevents expensive allocations.
Do pool buffers by using an ArrayPool<T> to store large arrays.
Do not allocate many short-lived large objects on hot code paths.
Memory issues, such as the preceding, can be diagnosed by reviewing garbage collection (GC) stats in PerfView and examining GC pause times, the percentage of processor time spent in garbage collection, and how many collections are generation 0, 1, and 2.
For more information, see Garbage Collection and Performance.
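A minimal sketch of the buffer-pooling recommendation, using System.Buffers.ArrayPool<T> to avoid allocating a new large array on every call:
using System.Buffers;
using System.IO;
using System.Threading.Tasks;

public static class BufferExample
{
    public static async Task<int> ReadChunkAsync(Stream stream)
    {
        // Rent a reusable buffer instead of allocating a 128 KB array,
        // which would otherwise land on the large object heap.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(128 * 1024);
        try
        {
            return await stream.ReadAsync(buffer, 0, buffer.Length);
        }
        finally
        {
            // Return the buffer so that other callers can reuse it.
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}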
Interactions with a data store and other remote services are often the slowest parts of an ASP.NET Core app. Reading and writing data efficiently is critical for good performance.
Recommendations:
Do filter and aggregate LINQ queries (with .Where, .Select, or .Sum statements, for example) so that the filtering is performed by the database.
The following approaches may improve performance in high-scale apps: DbContext pooling and explicitly compiled queries.
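The following sketch illustrates server-side filtering with projection and an explicitly compiled query. The ContosoDbContext, Products, and ProductSummary names are hypothetical EF Core types, and EF.CompileAsyncQuery requires Entity Framework Core.
using Microsoft.EntityFrameworkCore;

public class ProductQueries
{
    // An explicitly compiled query avoids repeating query-translation work on hot paths.
    private static readonly Func<ContosoDbContext, int, Task<Product>> _productById =
        EF.CompileAsyncQuery((ContosoDbContext context, int id) =>
            context.Products.FirstOrDefault(p => p.Id == id));

    public Task<Product> GetProductAsync(ContosoDbContext context, int id) =>
        _productById(context, id);

    // Filtering and projection run in the database; AsNoTracking skips
    // change-tracking overhead for read-only results.
    public Task<List<ProductSummary>> GetActiveProductSummariesAsync(ContosoDbContext context) =>
        context.Products
            .AsNoTracking()
            .Where(p => p.IsActive)
            .Select(p => new ProductSummary { Id = p.Id, Name = p.Name })
            .ToListAsync();
}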
We recommend measuring the impact of the preceding high-performance approaches before committing them to the code base. The additional complexity of compiled queries may not justify the performance improvement.
Query issues can be detected by reviewing the time spent accessing data with Application Insights or with profiling tools. Most databases also make statistics available concerning frequently executed queries.
Although HttpClient implements the IDisposable interface, it's designed for reuse. Closed HttpClient instances leave sockets open in the TIME_WAIT state for a short period of time. If a code path that creates and disposes of HttpClient objects is frequently used, the app may exhaust available sockets. HttpClientFactory was introduced in ASP.NET Core 2.1 as a solution to this problem. It handles pooling HTTP connections to optimize performance and reliability. For more information, see Use HttpClientFactory to implement resilient HTTP requests.
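A minimal sketch of registering and consuming pooled HttpClient instances through IHttpClientFactory; the "github" client name and the GitHubService type are illustrative:
// In Program.cs or Startup.ConfigureServices: register the factory and a named client.
services.AddHttpClient("github", client =>
{
    client.BaseAddress = new Uri("https://api.github.com/");
});

// In a consuming service: request a client per operation; the factory manages
// pooling of the underlying connections.
public class GitHubService
{
    private readonly IHttpClientFactory _httpClientFactory;

    public GitHubService(IHttpClientFactory httpClientFactory) =>
        _httpClientFactory = httpClientFactory;

    public async Task<string> GetRootDocumentAsync()
    {
        var client = _httpClientFactory.CreateClient("github");
        return await client.GetStringAsync("/");
    }
}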
Recommendations:
Do not create and dispose of HttpClient instances directly.
Do use HttpClientFactory to retrieve HttpClient instances. For more information, see Use HttpClientFactory to implement resilient HTTP requests.
You want all of your code to be fast. Frequently-called code paths are the most critical to optimize. These include:
Middleware components in the app's request processing pipeline, especially middleware run early in the pipeline. These components have a large impact on performance.
Code that's executed for every request or multiple times per request, such as custom logging, authorization handlers, or initialization of transient services.
Recommendations:
Do not use custom middleware components with long-running tasks.
Do use performance profiling tools, such as Visual Studio Diagnostic Tools or PerfView, to identify hot code paths.
Most requests to an ASP.NET Core app can be handled by a controller or page model calling necessary services and returning an HTTP response. For some requests that involve long-running tasks, it's better to make the entire request-response process asynchronous.
Recommendations:
Do not wait for long-running tasks to complete as part of ordinary HTTP request processing.
Do consider handling long-running requests with background services or out of process, for example with an Azure Function.
Do consider real-time communication options, such as SignalR, to communicate with clients asynchronously.
ASP.NET Core apps with complex front-ends frequently serve many JavaScript, CSS, or image files. Performance of initial load requests can be improved by bundling, which combines multiple files into one, and by minifying, which reduces file sizes by removing whitespace and comments.
Recommendations:
Do follow the bundling and minification guidelines, which show how to use the environment tag to handle both Development and Production environments.
Reducing the size of the response usually increases the responsiveness of an app, often dramatically. One way to reduce payload sizes is to compress an app's responses. For more information, see Response compression.
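A minimal sketch of enabling the built-in response compression middleware, shown with Startup-based hosting; the service registration and the early placement of the middleware are the key points:
public void ConfigureServices(IServiceCollection services)
{
    // Registers the compression services; Brotli and Gzip providers are used by default.
    services.AddResponseCompression();
    services.AddControllers();
}

public void Configure(IApplicationBuilder app)
{
    // Add the middleware before components that write the response.
    app.UseResponseCompression();
    app.UseRouting();
    app.UseEndpoints(endpoints => endpoints.MapControllers());
}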
Each new release of ASP.NET Core includes performance improvements. Optimizations in .NET Core and ASP.NET Core mean that newer versions generally outperform older versions. For example, .NET Core 2.1 added support for compiled regular expressions and benefitted from Span<T>. ASP.NET Core 2.2 added support for HTTP/2. ASP.NET Core 3.0 adds many improvements that reduce memory usage and improve throughput. If performance is a priority, consider upgrading to the current version of ASP.NET Core.
Exceptions should be rare. Throwing and catching exceptions is slow relative to other code flow patterns. Because of this, exceptions shouldn't be used to control normal program flow.
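For example, prefer Try-pattern APIs or validation over exceptions for expected conditions. The following sketch contrasts the two styles for parsing user input:
// Avoid: using an exception to handle an expected, common condition (bad input).
public int ParseWithException(string input)
{
    try
    {
        return int.Parse(input);
    }
    catch (FormatException)
    {
        return 0;
    }
}

// Prefer: the Try pattern handles the expected failure without throwing.
public int ParseWithTryPattern(string input)
{
    return int.TryParse(input, out var value) ? value : 0;
}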
Recommendations:
Do not use throwing or catching exceptions as a means of normal program flow, especially in hot code paths.
Do include logic in the app to detect and handle conditions that would cause an exception.
Do throw or catch exceptions for unusual or unexpected conditions.
App diagnostic tools, such as Application Insights, can help to identify common exceptions in an app that may affect performance.
All I/O in ASP.NET Core is asynchronous. Servers implement the Stream interface, which has both synchronous and asynchronous overloads. The asynchronous ones should be preferred to avoid blocking thread pool threads. Blocking threads can lead to thread pool starvation.
Do not do this: The following example uses ReadToEnd. It blocks the current thread to wait for the result. This is an example of sync over async.
public class BadStreamReaderController : Controller
{
[HttpGet("/contoso")]
public ActionResult<ContosoData> Get()
{
var json = new StreamReader(Request.Body).ReadToEnd();
return JsonSerializer.Deserialize<ContosoData>(json);
}
}
In the preceding code, Get synchronously reads the entire HTTP request body into memory. If the client is slowly uploading, the app is doing sync over async. The app does sync over async because Kestrel does NOT support synchronous reads.
Do this: The following example uses ReadToEndAsync and does not block the thread while reading.
public class GoodStreamReaderController : Controller
{
[HttpGet("/contoso")]
public async Task<ActionResult<ContosoData>> Get()
{
var json = await new StreamReader(Request.Body).ReadToEndAsync();
return JsonSerializer.Deserialize<ContosoData>(json);
}
}
The preceding code asynchronously reads the entire HTTP request body into memory.
Warning
If the request is large, reading the entire HTTP request body into memory could lead to an out of memory (OOM) condition. OOM can result in a Denial Of Service. For more information, see Avoid reading large request bodies or response bodies into memory in this article.
Do this: The following example is fully asynchronous using a non-buffered request body:
public class GoodStreamReaderController : Controller
{
[HttpGet("/contoso")]
public async Task<ActionResult<ContosoData>> Get()
{
return await JsonSerializer.DeserializeAsync<ContosoData>(Request.Body);
}
}
The preceding code asynchronously de-serializes the request body into a C# object.
Use HttpContext.Request.ReadFormAsync instead of HttpContext.Request.Form.
HttpContext.Request.Form can be safely read only with the following conditions:
The form has been read with a call to ReadFormAsync, and
The cached form value is being read using HttpContext.Request.Form.
Do not do this: The following example uses HttpContext.Request.Form. HttpContext.Request.Form uses sync over async and can lead to thread pool starvation.
public class BadReadController : Controller
{
[HttpPost("/form-body")]
public IActionResult Post()
{
var form = HttpContext.Request.Form;
Process(form["id"], form["name"]);
return Accepted();
    }
}
Do this: The following example uses HttpContext.Request.ReadFormAsync to read the form body asynchronously.
public class GoodReadController : Controller
{
[HttpPost("/form-body")]
public async Task<IActionResult> Post()
{
var form = await HttpContext.Request.ReadFormAsync();
Process(form["id"], form["name"]);
return Accepted();
    }
}
In .NET, every object allocation greater than or equal to 85,000 bytes ends up in the large object heap (LOH). Large objects are expensive in two ways: allocation is costly because the memory for a newly allocated large object has to be cleared, and the LOH is collected with the rest of the heap, requiring a full (generation 2) garbage collection.
This blog post describes the problem succinctly:
When a large object is allocated, it's marked as Gen 2 object. Not Gen 0 as for small objects. The consequences are that if you run out of memory in LOH, GC cleans up the whole managed heap, not only LOH. So it cleans up Gen 0, Gen 1 and Gen 2 including LOH. This is called full garbage collection and is the most time-consuming garbage collection. For many applications, it can be acceptable. But definitely not for high-performance web servers, where few big memory buffers are needed to handle an average web request (read from a socket, decompress, decode JSON, and more).
Storing a large request or response body into a single byte[] or string may quickly use up the available LOH space and can cause performance issues for the app because of full garbage collections.
When using a serializer/de-serializer that only supports synchronous reads and writes (for example, Json.NET), the data must be buffered into memory before it's passed to the serializer/de-serializer.
Warning
If the request is large, it could lead to an out of memory (OOM) condition. OOM can result in a Denial Of Service. For more information, see Avoid reading large request bodies or response bodies into memory in this article.
ASP.NET Core 3.0 uses System.Text.Json by default for JSON serialization. System.Text.Json:
Reads and writes JSON asynchronously.
Is optimized for UTF-8 text.
Typically offers higher performance than Newtonsoft.Json.
IHttpContextAccessor.HttpContext returns the HttpContext of the active request when accessed from the request thread. The IHttpContextAccessor.HttpContext should not be stored in a field or variable.
Do not do this: The following example stores the HttpContext in a field and then attempts to use it later.
public class MyBadType
{
private readonly HttpContext _context;
public MyBadType(IHttpContextAccessor accessor)
{
_context = accessor.HttpContext;
}
public void CheckAdmin()
{
if (!_context.User.IsInRole("admin"))
{
throw new UnauthorizedAccessException("The current user isn't an admin");
}
}
}
The preceding code frequently captures a null or incorrect HttpContext in the constructor.
Do this: The following example:
Stores the IHttpContextAccessor in a field.
Uses the HttpContext at the correct time and checks for null.
public class MyGoodType
{
private readonly IHttpContextAccessor _accessor;
public MyGoodType(IHttpContextAccessor accessor)
{
_accessor = accessor;
}
public void CheckAdmin()
{
var context = _accessor.HttpContext;
if (context != null && !context.User.IsInRole("admin"))
{
throw new UnauthorizedAccessException("The current user isn't an admin");
}
}
}
HttpContext is not thread-safe. Accessing HttpContext from multiple threads in parallel can result in unexpected behavior, such as the server becoming unresponsive, crashes, and data corruption.
Do not do this: The following example makes three parallel requests and logs the incoming request path before and after the outgoing HTTP request. The request path is accessed from multiple threads, potentially in parallel.
public class AsyncBadSearchController : Controller
{
[HttpGet("/search")]
public async Task<SearchResults> Get(string query)
{
var query1 = SearchAsync(SearchEngine.Google, query);
var query2 = SearchAsync(SearchEngine.Bing, query);
var query3 = SearchAsync(SearchEngine.DuckDuckGo, query);
await Task.WhenAll(query1, query2, query3);
var results1 = await query1;
var results2 = await query2;
var results3 = await query3;
return SearchResults.Combine(results1, results2, results3);
}
private async Task<SearchResults> SearchAsync(SearchEngine engine, string query)
{
var searchResults = _searchService.Empty();
try
{
_logger.LogInformation("Starting search query from {path}.",
HttpContext.Request.Path);
searchResults = _searchService.Search(engine, query);
_logger.LogInformation("Finishing search query from {path}.",
HttpContext.Request.Path);
}
catch (Exception ex)
{
_logger.LogError(ex, "Failed query from {path}",
HttpContext.Request.Path);
}
return await searchResults;
    }
}
Do this: The following example copies all data from the incoming request before making the three parallel requests.
public class AsyncGoodSearchController : Controller
{
[HttpGet("/search")]
public async Task<SearchResults> Get(string query)
{
string path = HttpContext.Request.Path;
var query1 = SearchAsync(SearchEngine.Google, query,
path);
var query2 = SearchAsync(SearchEngine.Bing, query, path);
var query3 = SearchAsync(SearchEngine.DuckDuckGo, query, path);
await Task.WhenAll(query1, query2, query3);
var results1 = await query1;
var results2 = await query2;
var results3 = await query3;
return SearchResults.Combine(results1, results2, results3);
}
private async Task<SearchResults> SearchAsync(SearchEngine engine, string query,
string path)
{
var searchResults = _searchService.Empty();
try
{
_logger.LogInformation("Starting search query from {path}.",
path);
searchResults = await _searchService.SearchAsync(engine, query);
_logger.LogInformation("Finishing search query from {path}.", path);
}
catch (Exception ex)
{
_logger.LogError(ex, "Failed query from {path}", path);
}
return await searchResults;
    }
}
HttpContext is only valid as long as there is an active HTTP request in the ASP.NET Core pipeline. The entire ASP.NET Core pipeline is an asynchronous chain of delegates that executes every request. When the Task returned from this chain completes, the HttpContext is recycled.
Do not do this: The following example uses async void, which makes the HTTP request complete when the first await is reached:
Using async void is ALWAYS a bad practice in ASP.NET Core apps.
The code accesses the HttpResponse after the HTTP request is complete.
This crashes the process.
public class AsyncBadVoidController : Controller
{
[HttpGet("/async")]
public async void Get()
{
await Task.Delay(1000);
// The following line will crash the process because of writing after the
// response has completed on a background thread. Notice async void Get()
await Response.WriteAsync("Hello World");
}
}
Do this: The following example returns a Task to the framework, so the HTTP request doesn't complete until the action completes.
public class AsyncGoodTaskController : Controller
{
[HttpGet("/async")]
public async Task Get()
{
await Task.Delay(1000);
await Response.WriteAsync("Hello World");
}
}
Do not do this: The following example shows a closure that captures the HttpContext from the Controller property. This is a bad practice because the work item could:
Run outside of the request scope.
Attempt to read the wrong HttpContext.
[HttpGet("/fire-and-forget-1")]
public IActionResult BadFireAndForget()
{
_ = Task.Run(async () =>
{
await Task.Delay(1000);
var path = HttpContext.Request.Path;
Log(path);
});
return Accepted();
}
Do this: The following example:
Copies the data required in the background task during the request.
Doesn't reference anything from the controller.
[HttpGet("/fire-and-forget-3")]
public IActionResult GoodFireAndForget()
{
string path = HttpContext.Request.Path;
_ = Task.Run(async () =>
{
await Task.Delay(1000);
Log(path);
});
return Accepted();
}
Background tasks should be implemented as hosted services. For more information, see Background tasks with hosted services.
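A minimal sketch of a hosted service based on BackgroundService; the work inside the loop is a placeholder, and scoped services are resolved from a scope owned by the background service rather than by a request:
public class QueuedWorkService : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;

    public QueuedWorkService(IServiceScopeFactory scopeFactory) =>
        _scopeFactory = scopeFactory;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            using (var scope = _scopeFactory.CreateScope())
            {
                // Resolve scoped services (such as a DbContext) here and process work.
            }

            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}

// Registration, for example in Startup.ConfigureServices:
// services.AddHostedService<QueuedWorkService>();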
Do not do this: The following example shows a closure that captures the DbContext from the Controller action parameter. This is a bad practice. The work item could run outside of the request scope. The ContosoDbContext is scoped to the request, resulting in an ObjectDisposedException.
[HttpGet("/fire-and-forget-1")]
public IActionResult FireAndForget1([FromServices]ContosoDbContext context)
{
_ = Task.Run(async () =>
{
await Task.Delay(1000);
context.Contoso.Add(new Contoso());
await context.SaveChangesAsync();
});
return Accepted();
}
Do this: The following example:
Injects an IServiceScopeFactory in order to create a scope in the background work item. IServiceScopeFactory is a singleton.
Creates a new dependency injection scope in the background thread.
Doesn't reference anything from the controller.
Doesn't capture the ContosoDbContext from the incoming request.
[HttpGet("/fire-and-forget-3")]
public IActionResult FireAndForget3([FromServices]IServiceScopeFactory
serviceScopeFactory)
{
_ = Task.Run(async () =>
{
await Task.Delay(1000);
await using (var scope = serviceScopeFactory.CreateAsyncScope())
{
var context = scope.ServiceProvider.GetRequiredService<ContosoDbContext>();
context.Contoso.Add(new Contoso());
await context.SaveChangesAsync();
}
});
return Accepted();
}
In the preceding code, the scope created from IServiceScopeFactory lives for the duration of the background operation, so the ContosoDbContext is resolved from the correct scope rather than from the disposed request scope.
ASP.NET Core does not buffer the HTTP response body. The first time the response is written, the headers are sent along with that chunk of the body to the client, and it's no longer possible to change response headers.
Do not do this: The following code tries to add response headers after the response has already started:
app.Use(async (context, next) =>
{
await next();
context.Response.Headers["test"] = "test value";
});
In the preceding code, context.Response.Headers["test"] = "test value"; will throw an exception if next() has written to the response.
Do this: The following example checks if the HTTP response has started before modifying the headers.
app.Use(async (context, next) =>
{
await next();
if (!context.Response.HasStarted)
{
context.Response.Headers["test"] = "test value";
}
});
Do this: The following example uses HttpResponse.OnStarting to set the headers before the response headers are flushed to the client.
Registering a callback that is invoked just before response headers are written:
Provides the ability to append or override headers just in time.
Doesn't require knowledge of the next middleware in the pipeline.
app.Use(async (context, next) =>
{
context.Response.OnStarting(() =>
{
context.Response.Headers["someheader"] = "somevalue";
return Task.CompletedTask;
});
await next();
});
Do not call next() if you have already started writing to the response body. Components only expect to be called if it's possible for them to handle and manipulate the response.
Using in-process hosting, an ASP.NET Core app runs in the same process as its IIS worker process. In-process hosting provides improved performance over out-of-process hosting because requests aren't proxied over the loopback adapter. The loopback adapter is a network interface that returns outgoing network traffic back to the same machine. IIS handles process management with the Windows Process Activation Service (WAS).
Projects default to the in-process hosting model in ASP.NET Core 3.0 and later.
For more information, see Host ASP.NET Core on Windows with IIS.
HttpRequest.ContentLength is null if the Content-Length header is not received. Null in that case means the length of the request body is not known; it doesn't mean the length is zero. Because all comparisons with null (except ==) return false, the comparison Request.ContentLength > 1024, for example, might return false when the request body size is more than 1024. Not knowing this can lead to security holes in apps. You might think you're protecting against too-large requests when you aren't.
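The following sketch, written as it would appear inside a controller action or middleware, shows the pitfall and an explicit check:
// Request.ContentLength is a long? (nullable). When the Content-Length header
// is missing, the value is null and the comparison below is false.
if (Request.ContentLength > 1024)
{
    // Not reached for requests without a Content-Length header,
    // even if the body is actually larger than 1024 bytes.
}

// Prefer an explicit check that distinguishes "unknown" from "within the limit".
if (Request.ContentLength is long length && length <= 1024)
{
    // The body length is known and within the limit.
}
else
{
    // The length is unknown (null) or too large; enforce a limit another way,
    // for example with the server's request body size limits.
}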
For more information, see this StackOverflow answer.
See The Reliable Web App Pattern for .NET YouTube videos and article for guidance on creating a modern, reliable, performant, testable, cost-efficient, and scalable ASP.NET Core app, whether from scratch or refactoring an existing app.