Chapter 16 — Testing .NET Application Performance
**Retired Content**

This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.
J.D. Meier, Srinath Vasireddy, Ashish Babbar, and Alex Mackman
Microsoft Corporation
May 2004
Related Links
Home Page for Improving .NET Application Performance and Scalability
Send feedback to Scale@microsoft.com
Summary: This chapter provides a step-by-step process that helps you to load test, stress test, and capacity test your Web application. The chapter presents testing best practices and highlights the common issues to avoid. It also shows how to simulate workload, and then measure and analyze results.
Objectives
Overview
How to Use This Chapter
Performance Testing
Goals of Performance Testing
Performance Objectives
Tools
Load Testing Process
Stress-Testing Process
Workload Modeling
Testing Considerations
Best Practices for Performance Testing
Metrics
Reporting
Analysis of Performance Data
Summary
Additional Resources
- Learn performance testing fundamentals.
- Perform load testing.
- Perform stress testing.
- Perform workload modeling.
- Identify testing best and worst practices.
Performance testing is used to verify that an application is able to perform under expected and peak load conditions, and that it can scale sufficiently to handle increased capacity. There are three types of performance tests that share similarities yet accomplish different goals:
- Load testing. Use load testing to verify application behavior under normal and peak load conditions. This allows you to verify that your application can meet your desired performance objectives, which are often specified in a service level agreement. Load testing enables you to measure response times, throughput rates, and resource utilization levels, and to identify your application's breaking point, assuming that the breaking point occurs below the peak load condition.
- Stress testing. Use stress testing to evaluate your application's behavior when it is pushed beyond the normal or peak load conditions. The goal of stress testing is to unearth application bugs that surface only under high load conditions. These can include such things as synchronization issues, race conditions, and memory leaks. Stress testing enables you to identify your application's weak points, and how it behaves under extreme load conditions.
- Capacity testing. Capacity testing complements load testing: load testing monitors results at various levels of load and traffic patterns, whereas capacity testing determines your server's ultimate failure point. You perform capacity testing in conjunction with capacity planning, which you use to plan for future growth, such as an increased user base or increased volume of data. For example, to accommodate future loads you need to know how many additional resources (such as CPU, RAM, disk space, or network bandwidth) are necessary to support future usage levels. Capacity testing helps you identify a scaling strategy and determine whether you should scale up or scale out. For more information, see "How To: Perform Capacity Planning for .NET Applications" in the "How To" section of this guide.
This chapter demonstrates an approach to performance testing that is particularly effective when combined with the other principles in this guide.
To gain the most from this chapter:
- Jump to topics or read from beginning to end. The main headings in this chapter help you locate the topics that interest you. Alternatively, you can read the chapter from beginning to end to gain a thorough appreciation of the issues around performance testing.
- Identify your stress test tool. To execute the performance test processes defined in this chapter, you need a stress test tool to simulate user load. For example, you could use the Microsoft® Application Center Test (ACT) tool, the Microsoft Web Application Stress tool, or any other tool of your choice. ACT is included with Enterprise editions of the Microsoft Visual Studio® .NET development system. You can download the Microsoft Web Application Stress tool at https://www.iis.net/downloads/default.aspx?tabid=34&g=6&i=1298.
- Identify your scenarios. Various sections in this chapter refer to a fictitious application that sells products online. For example, the user can browse and search through products, add them to a shopping cart, and purchase them with a credit card. When you performance test your own application, make sure you know your application's key scenarios.
- Use the "How To" section. The "How To" section of this guide includes the following instructional articles referenced by this chapter:
Performance testing is the process of identifying how an application responds to a specified set of conditions and input. Multiple individual performance test scenarios (suites, cases, scripts) are often needed to cover all of the conditions and inputs of interest. For testing purposes, host the application on a hardware infrastructure that is as representative of the live environment as possible. By examining your application's behavior under simulated load conditions, you can identify whether your application is trending toward or away from its defined performance objectives.
The main goal of performance testing is to identify how well your application performs in relation to your performance objectives. Some of the other goals of performance testing include the following:
- Identify bottlenecks and their causes.
- Optimize and tune the platform configuration (both the hardware and software) for maximum performance.
- Verify the reliability of your application under stress.
You may not be able to identify all the characteristics by running a single type of performance test. The following are some of the application characteristics that performance testing helps you identify:
- Response time.
- Throughput.
- Maximum concurrent users supported. For a definition of concurrent users, see "Testing Considerations," later in this chapter.
- Resource utilization in terms of the amount of CPU, RAM, network I/O, and disk I/O resources your application consumes during the test.
- Behavior under various workload patterns including normal load conditions, excessive load conditions, and conditions in between.
- Application breaking point. The breaking point is the condition at which the application stops responding to requests. Symptoms of the breaking point include HTTP 503 errors with a "Server Too Busy" message, and errors in the application event log indicating that the ASP.NET worker process recycled because of potential deadlocks.
- Symptoms and causes of application failure under stress conditions.
- Weak points in your application.
- What is required to support a projected increase in load. For example, an increase in the number of users, amount of data, or application activity might cause an increase in load.
Most of the performance tests depend on a set of predefined, documented, and agreed-upon performance objectives. Knowing the objectives from the beginning helps make the testing process more efficient. You can evaluate your application's performance by comparing it with your performance objectives.
You may also run exploratory tests, without any performance objective, to learn more about the system. But even these eventually serve as input to the tests that are conducted to evaluate performance against performance objectives.
Performance objectives often include the following:
- Response time or latency
- Throughput
- Resource utilization (CPU, network I/O, disk I/O, and memory)
- Workload
Response time is the amount of time taken to respond to a request. You can measure response time at the server or client as follows:
- Latency measured at the server. This is the time taken by the server to complete the execution of a request. This does not include the client-to-server latency, which includes additional time for the request and response to cross the network.
- Latency measured at the client. The latency measured at the client includes the request queue, plus the time taken by the server to complete the execution of the request and the network latency. You can measure the latency in various ways. Two common approaches are time taken by the first byte to reach the client (time to first byte, TTFB), or the time taken by the last byte of the response to reach the client (time to last byte, TTLB). Generally, you should test this using various network bandwidths between the client and the server.
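As a minimal sketch of how a test client might derive these two values, the following Python function (illustrative only, not part of ACT or any Microsoft tool) times the arrival of response body chunks. The moment iteration begins stands in for the moment the request is sent:

```python
import time

def measure_ttfb_ttlb(chunks):
    """Consume a response body chunk by chunk and return (TTFB, TTLB).

    `chunks` is any iterable yielding byte chunks from the server.
    TTFB is the elapsed time when the first chunk arrives; TTLB is the
    elapsed time when the final chunk has been read.
    """
    start = time.perf_counter()
    ttfb = None
    for chunk in chunks:
        if ttfb is None:
            ttfb = time.perf_counter() - start  # first byte has arrived
    ttlb = time.perf_counter() - start          # last byte has arrived
    return ttfb, ttlb
```

In a real client test, `chunks` would come from a socket or HTTP response stream, and you would repeat the measurement at each network bandwidth of interest.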
By measuring latency, you can gauge whether your application takes too long to respond to client requests.
Throughput is the number of requests that can be served by your application per unit time. It can vary depending upon the load (number of users) and the type of user activity applied to the server. For example, downloading files requires higher throughput than browsing text-based Web pages. Throughput is usually measured in terms of requests per second. There are other units for measurement, such as transactions per second or orders per second.
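The calculation itself is straightforward; as an illustrative sketch, the function below derives requests per second from the completion timestamps a load tool might record:

```python
def throughput_per_second(request_timestamps):
    """Requests per second over the observed test window.

    `request_timestamps` holds the completion time, in seconds, of each
    request served during the run.
    """
    window = max(request_timestamps) - min(request_timestamps)
    if window == 0:
        raise ValueError("need a measurement window longer than zero seconds")
    return len(request_timestamps) / window

# 501 request completion times spread evenly over a 5-second window
rate = throughput_per_second([i * 0.01 for i in range(501)])
```

To report transactions per second or orders per second instead, you would count only the timestamps belonging to that operation type.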
Identify resource utilization costs in terms of server and network resources. The primary resources are:
- CPU
- Memory
- Disk I/O
- Network I/O
You can identify the resource cost on a per operation basis. Operations might include browsing a product catalog, adding items to a shopping cart, or placing an order. You can measure resource costs for a given user load, or you can average resource costs when the application is tested using a given workload profile.
A workload profile consists of an aggregate mix of users performing various operations. For example, for a load of 200 concurrent users (as defined below), the profile might indicate that 20 percent of users perform order placement, 30 percent add items to a shopping cart, while 50 percent browse the product catalog. This helps you identify and optimize areas that consume an unusually large proportion of server resources and response time.
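The 200-user profile above reduces to a simple calculation. The sketch below is illustrative, and the scenario names are taken from the example rather than from any real workload:

```python
def distribute_users(total_users, profile):
    """Split a user count across scenarios according to a percentage profile."""
    if sum(profile.values()) != 100:
        raise ValueError("profile percentages must sum to 100")
    return {scenario: total_users * pct // 100
            for scenario, pct in profile.items()}

profile = {"place order": 20, "add to cart": 30, "browse catalog": 50}
users = distribute_users(200, profile)
# {'place order': 40, 'add to cart': 60, 'browse catalog': 100}
```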
In this chapter, the load on the application is defined in terms of simultaneous users and concurrent users.
Simultaneous users have active connections to the same Web site, whereas concurrent users hit the site at exactly the same moment. Concurrent access is likely to occur at infrequent intervals. Your site may have 100 to 150 concurrent users but 1,000 to 1,500 simultaneous users.
When load testing your application, you can simulate simultaneous users by including a random think time in your script such that not all the user threads from the load generator are firing requests at the same moment. This is useful to simulate real world situations.
However, if you want to stress your application, you probably want to use concurrent users. You can simulate concurrent users by removing the think time from your script.
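A load-generator worker might implement this switch as follows. This is a minimal Python illustration, assuming each scripted operation is a plain callable; real tools such as ACT handle this through test script settings:

```python
import random
import time

def run_virtual_user(operations, think_range=(1.0, 10.0)):
    """Execute one scripted user session.

    With a think_range, random pauses spread the requests out, which
    approximates simultaneous users. Pass think_range=None to remove
    think time entirely and drive concurrent-user stress instead.
    """
    for operation in operations:
        operation()
        if think_range is not None:
            time.sleep(random.uniform(*think_range))
```

A stress run would start many threads, each executing `run_virtual_user(script, think_range=None)`, so that requests fire at the same moment.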
For more information on workload modeling, see "Workload Modeling," later in this chapter.
There are tools available to help simulate load. You can simulate load in terms of users, connections, data, and in other ways. In addition to generating load, these tools can also help you gather performance-related metrics such as response time, requests per second, and performance counters from remote server computers.
Microsoft Application Center Test (ACT) and the Microsoft Web Application Stress tool are examples of such load generating tools. The ACT tool is included with Enterprise editions of Visual Studio .NET. You can download the Microsoft Web Application Stress Tool at https://www.iis.net/downloads/default.aspx?tabid=34&g=6&i=1298.
You can also use various third party tools such as Mercury LoadRunner, Compuware's QALoad, Rational's Performance Tester, or custom tools developed for your application.
You use load testing to verify application behavior under normal and peak load conditions. You incrementally increase the load from normal to peak load to see how your application performs with varying load conditions. You continue to increase the load until you cross the threshold limit for your performance objectives. For example, you might continue to increase the load until the server CPU utilization reaches 75 percent, which is your specified threshold. The load testing process lets you identify application bottlenecks and the maximum operating capacity of the application.
Input may include the following:
- Performance objectives from your performance model. For more information about Performance Modeling, see Chapter 2, "Performance Modeling."
- Application characteristics (scenarios).
- Workload characteristics.
- Performance objectives for each scenario.
- Test plans.
The load testing process is a six-step process, as shown in Figure 16.1.
Figure 16.1: The load testing process
The load testing process involves the following steps:
- Identify key scenarios. Identify application scenarios that are critical for performance.
- Identify workload. Distribute the total application load among the key scenarios identified in step 1.
- Identify metrics. Identify the metrics that you want to collect about the application when running the test.
- Create test cases. Create the test cases where you define steps for executing a single test along with the expected results.
- Simulate load. Use test tools to simulate load according to the test cases and to capture the result metrics.
- Analyze the results. Analyze the metric data captured during the test.
The next sections describe each of these steps.
Start by identifying your application's key scenarios. Scenarios are anticipated user paths that generally incorporate multiple application activities. Key scenarios are those for which you have specific performance goals or those that have a significant performance impact, either because they are commonly executed or because they are resource intensive. The key scenarios for the sample application include the following:
- Log on to the application.
- Browse a product catalog.
- Search for a specific product.
- Add items to the shopping cart.
- Validate credit card details and place an order.
Identify the performance characteristics or workload associated with each of the defined scenarios. For each scenario you must identify the following:
- Numbers of users. The total number of concurrent and simultaneous users who access the application in a given time frame. For a definition of concurrent users, see "Testing Considerations," later in this chapter.
- Rate of requests. The requests received from the concurrent load of users per unit time.
- Patterns of requests. A given load of concurrent users may be performing different tasks using the application. Patterns of requests identify the average load of users, and the rate of requests for a given functionality of an application.
For more information about how to create a workload model for your application, see "Workload Modeling," later in this chapter.
After you create a workload model, begin load testing with a total number of users distributed against your user profile, and then start to incrementally increase the load for each test cycle. Continue to increase the load, and record the behavior until you reach the threshold for the resources identified in your performance objectives. You can also continue to increase the number of users until you hit your service level limits, beyond which you would be violating your service level agreements for throughput, response time, and resource utilization.
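The incremental ramp described above can be sketched as a simple driver loop. This is illustrative only; `run_cycle` stands in for whatever your load tool does to execute one test cycle and report the limiting metric:

```python
def ramp_until_threshold(start_users, step, run_cycle, threshold=75.0):
    """Increase user load each cycle until the metric crosses the threshold.

    run_cycle(users) runs one load-test cycle at the given user load and
    returns the observed value of the limiting metric (for example,
    % CPU utilization). Returns a dict mapping user load to the
    observed metric value for each cycle.
    """
    results = {}
    users = start_users
    while True:
        observed = run_cycle(users)
        results[users] = observed
        if observed >= threshold:
            break
        users += step
    return results
```

Each entry in the returned dict is one row of the behavior-versus-load record that the analysis step examines.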
Identify the metrics that you need to measure when you run your tests. When you simulate load, you need to know which metrics to look for and where to gauge the performance of your application. Identify the metrics that are relevant to your performance objectives, as well as those that help you identify bottlenecks. Metrics allow you to evaluate how your application performs in relation to performance objectives — such as throughput, response time, and resource utilization.
As you progress through multiple iterations of the tests, you can add metrics based upon your analysis of the previous test cycles. For example, if you observe that your ASP.NET worker process shows a marked increase in the Process\Private Bytes counter during a test cycle, in the next iteration you might add further memory-related counters (such as counters for garbage collection generations) to monitor the worker process's memory usage more precisely.
For more information about the types of metrics to capture for an ASP.NET application, see "Metrics," later in this chapter.
To evaluate the performance of your application in more detail and to identify the potential bottlenecks, monitor metrics under the following categories:
Network-specific metrics. This set of metrics provides information about the overall health and efficiency of your network, including routers, switches, and gateways.
System-related metrics. This set of metrics helps you identify the resource utilization on your server. The resources are CPU, memory, disk I/O, and network I/O.
Platform-specific metrics. Platform-specific metrics are related to software that is used to host your application, such as the .NET Framework common language runtime and ASP.NET-related metrics.
Application-specific metrics. These include custom performance counters inserted in your application code to monitor application health and identify performance issues. You might use custom counters to determine the number of concurrent threads waiting to acquire a particular lock or the number of requests queued to make an outbound call to a Web service.
Service level metrics. Service level metrics can help to measure overall application throughput and latency, or they might be tied to specific business scenarios as shown in Table 16.1.
Table 16.1: Sample Service Level Metrics for the Sample Application
Metric | Value |
---|---|
Orders / second | 70 |
Catalogue Browse / second | 130 |
Number of concurrent users | 100 |
For a complete list of the counters that you need to measure, see "Metrics," later in this chapter.
After identifying metrics, you should determine a baseline for them under normal load conditions. This helps you decide on the acceptable load levels for your application. Baseline values help you analyze your application performance at varying load levels. An example is shown in Table 16.2.
Table 16.2: Acceptable Load Levels
Metric | Accepted level |
---|---|
% CPU Usage | Must not exceed 60% |
Requests / second | 100 or more |
Response time (TTLB) for client on 56 Kbps bandwidth | Must not exceed 8 seconds |
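One simple way to establish such a baseline is to summarize the metric samples collected under normal load. The sketch below uses the mean and a nearest-rank style 95th percentile, which is just one common convention:

```python
import statistics

def baseline(samples):
    """Summarize one metric's samples collected under normal load."""
    ordered = sorted(samples)
    index = int(0.95 * (len(ordered) - 1))  # nearest-rank style 95th percentile
    return {"mean": statistics.fmean(ordered), "p95": ordered[index]}

# e.g. 100 response-time samples ranging from 1 to 100 milliseconds
summary = baseline(range(1, 101))
```

Comparing later test-cycle summaries against these baseline figures shows how far each load level drifts from normal behavior.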
Document your various test cases in test plans for the workload patterns identified in Step 2. Two examples are shown in this section.
The test case for the sample e-commerce application used for illustration purposes in this chapter might define the following:
- Number of users: 500 simultaneous users
- Test duration: 2 hours
- Think time: Random think time between 1 and 10 seconds in the test script after each operation
Divide the users into various user profiles based on the workload identified in step 2. For the sample application, the distribution of load for various profiles could be similar to that shown in Table 16.3.
Table 16.3: Load Distribution
User scenarios | Percentage of users | Users |
---|---|---|
Browse | 50 | 250 |
Search | 30 | 150 |
Place order | 20 | 100 |
Total | 100 | 500 |
The expected results for the sample application might be defined as the following:
- Throughput: 100 requests per second (ASP.NET\Requests/sec performance counter).
- Requests Executing: 45 requests executing (ASP.NET\Requests Executing performance counter).
- Avg. Response Time: 2.5 second response time (TTLB on a 100 megabits per second [Mbps] LAN).
- Resource utilization thresholds:
  - Processor\% Processor Time: 75 percent.
  - Memory\Available MBytes: 25 percent of total physical RAM.
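Checking captured results against objectives like these lends itself to automation. The sketch below is illustrative; the metric names and values mirror the example rather than any real counter API:

```python
def failed_objectives(measured, objectives):
    """Return the names of metrics that violate their objective.

    objectives maps metric name -> (kind, limit), where kind is "max"
    (measured value must not exceed the limit) or "min" (measured value
    must not fall below it).
    """
    failures = []
    for metric, (kind, limit) in objectives.items():
        value = measured[metric]
        if (kind == "max" and value > limit) or (kind == "min" and value < limit):
            failures.append(metric)
    return failures

objectives = {
    "requests/sec": ("min", 100),
    "response time (s)": ("max", 2.5),
    "% processor time": ("max", 75),
}
measured = {"requests/sec": 92, "response time (s)": 2.1, "% processor time": 81}
failed = failed_objectives(measured, objectives)  # throughput and CPU fail
```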
Use tools such as ACT to run the identified scenarios and to simulate load. In addition to handling common client requirements such as authentication, cookies, and view state, ACT allows you to run multiple instances of the test at the same time to match the test case.
**Note** Make sure the client computers you use to generate load are not overly stressed. Resource utilization, such as processor and memory, should be well below the utilization threshold values.
For more information about using ACT for performance testing, see "How To: Use ACT to Test Performance and Scalability" in the "How To" section of this guide.
Analyze the captured data and compare the results against the metric's accepted level. The data you collect helps you analyze your application with respect to your application's performance objectives:
- Throughput versus user load.
- Response time versus user load.
- Resource utilization versus user load.
Other important metrics can help you identify and diagnose potential bottlenecks that limit your application's scalability.
To generate the test data, continue to increase load incrementally for multiple test iterations until you cross the threshold limits set for your application. Threshold limits may include service level agreements for throughput, response time, and resource utilization. For example, the threshold limit set for CPU utilization may be set to 75 percent; therefore, you can continue to increase the load and perform tests until the processor utilization reaches around 80 percent.
The analysis report that you generate at the end of various test iterations identifies your application behavior at various load levels. For a sample report, see the "Reporting" section later in this chapter.
If you continue to increase load during the testing process, you are likely to ultimately cause your application to fail. When you start to receive HTTP 503 ("Server Too Busy") responses from the server, the server's request queue is full and it has started to reject requests. These responses appear as 503 errors in the ACT stress tool.
Another example of application failure is a situation where the ASP.NET worker process recycles on the server because memory consumption has reached the limit defined in the Machine.config file or the worker process has deadlocked and has exceeded the time duration specified through the responseDeadlockInterval attribute in the Machine.config file.
You can identify bottlenecks in the application by analyzing the metrics data. At this point, you need to investigate the cause, fix or tune your application, and then run the tests again. Based upon your test analysis, you may need to create and run special tests that are very focused.
More Information
For more information about identifying architecture and design-related issues in your application, see Chapter 4, "Architecture and Design Review of a .NET Application for Performance and Scalability."
For more information about identifying code-related issues in your application, see Chapter 13, "Code Review: .NET Application Performance."
For more information about tuning, see Chapter 17, "Tuning .NET Application Performance."
The various outputs of the load testing process are the following:
- Updated test plans.
- Behavior of your application at various load levels.
- Maximum operating capacity.
- Potential bottlenecks.
- Recommendations for fixing the bottlenecks.
Stress test your application by subjecting it to very high loads that are beyond the capacity of the application, while denying it the resources required to process that load. For example, you can deploy your application on a server that is running a processor-intensive application already. In this way, your application is immediately starved of processor resources and must compete with the other application for CPU cycles.
The goal of stress testing is to unearth application bugs that surface only under high load conditions. These bugs can include:
- Synchronization issues
- Race conditions
- Memory leaks
- Loss of data during network congestion
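As a small illustration of the first two bug classes above, the Python sketch below shows the kind of shared-state update that stress testing is designed to flush out. This is a generic example, not code from the sample application: the lock serializes the read-modify-write sequence; removing it would create a race in which two sessions can read the same value and lose an update, a bug that typically surfaces only under heavy concurrent load:

```python
import threading

class InventoryCounter:
    """A shared stock counter guarded by a lock.

    Without the lock, the read-modify-write below is a race condition:
    under stress, two threads can read the same count and one decrement
    is silently lost.
    """
    def __init__(self, stock):
        self._count = stock
        self._lock = threading.Lock()

    def decrement_stock(self):
        with self._lock:
            current = self._count       # read
            self._count = current - 1   # modify and write, atomic under the lock

    @property
    def count(self):
        return self._count

counter = InventoryCounter(stock=8000)

def buy_many(n=1000):
    for _ in range(n):
        counter.decrement_stock()

threads = [threading.Thread(target=buy_many) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock in place, all 8,000 decrements are applied.
```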
Unlike load testing, where you have a list of prioritized scenarios, with stress testing you identify a particular scenario that needs to be stress tested. There may be more than one scenario or there may be a combination of scenarios that you can stress test during a particular test run to reproduce a potential problem. You can also stress test a single Web page or even a single item, such as a stored procedure or class.
To perform stress testing, you need the following input:
- Application characteristics (scenarios)
- Potential or possible problems with the scenarios
- Workload profile characteristics
- Peak load capacity (obtained from load testing)
The stress testing process is a six-step process as shown in Figure 16.2.
Figure 16.2: Stress testing process
The process involves the following steps:
- Identify key scenario(s). Identify the application scenario or scenarios that need to be stress tested to expose potential problems.
- Identify workload. Identify the workload that you want to apply to the scenarios identified in step 1. This is based on the workload and peak load capacity inputs.
- Identify the metrics. Identify the metrics that you want to collect about the application. Base these metrics on the potential problems identified for the scenarios you identified in step 1.
- Create test cases. Create the test cases where you define steps for running a single test and your expected results.
- Simulate load. Use test tools to simulate the required load for each test case and capture the metric data results.
- Analyze the results. Analyze the metric data captured during the test.
The next sections describe each of these steps.
Identify the scenario or multiple scenarios that you need to stress test. Generally, you start by defining a single scenario that you want to stress test to identify a potential performance issue. There are a number of ways to choose appropriate scenarios:
- Select scenarios based on how critical they are to overall application performance.
- Try to test those operations that are most likely to affect performance. These might include operations that perform intensive locking and synchronization, long transactions, and disk-intensive I/O operations.
- Base your scenario selection on the data obtained from load testing that identified specific areas of your application as potential bottlenecks. While you should have fine-tuned and removed the bottlenecks after load testing, you should still stress test the system in these areas to verify how well your changes handle extreme stress levels.
Scenarios that may need to be stress tested separately for a typical e-commerce application include the following:
- An order processing scenario that updates the inventory for a particular product. This functionality has the potential to exhibit locking and synchronization problems.
- A scenario that pages through search results based on user queries. If a user specifies a particularly wide query, there could be a large impact on memory utilization. For example, memory utilization could be affected if a query returns an entire data table.
For illustration purposes, this section considers the order placement scenario from the sample e-commerce application.
The load you apply to a particular scenario should stress the system sufficiently beyond threshold limits. You can incrementally increase the load and observe the application behavior over various load conditions.
As an alternative, you can start by applying an anti-profile to the workload model from load testing. The anti-profile has the workload distributions inverted for the scenario under consideration. For example, if the normal load for the place order scenario is 10 percent of the total workload, the anti-profile is 90 percent of the total workload. The remaining load can be distributed among the other scenarios. This can serve as a good starting point for your stress tests.
You can continue to increase the load beyond this starting point for stress testing purposes. Configure the anti-profile in such a way that the load for the identified scenario is deliberately increased beyond the peak load conditions.
For example, consider the normal workload profile identified for a sample application that is shown in Table 16.4.
Table 16.4: Sample Workload Profile for a Sample Application
User profile | Percentage | Simultaneous users |
---|---|---|
Browse | 50% | 500 |
Search | 30% | 300 |
Order | 20% | 200 |
Total | 100% | 1,000 |
The anti-profile used to stress test the order placement scenario is shown in Table 16.5.
Table 16.5: Order Use Case Anti-Profile
User profile | Percentage | Simultaneous users |
---|---|---|
Browse | 10% | 100 |
Search | 10% | 100 |
Order | 80% | 800 |
Total | 100% | 1,000 |
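Under the simple convention used in Tables 16.4 and 16.5 (the target scenario's share is inverted, and the remainder is split evenly among the other scenarios), the anti-profile can be derived mechanically. The function below is an illustrative sketch of that one convention, not the only way to build an anti-profile:

```python
def anti_profile(profile, target):
    """Invert the target scenario's workload share.

    The target scenario gets (100 - its normal share); what remains is
    split evenly among the other scenarios, matching the convention
    shown in Tables 16.4 and 16.5.
    """
    inverted = 100 - profile[target]
    others = [s for s in profile if s != target]
    result = {s: (100 - inverted) // len(others) for s in others}
    result[target] = inverted
    return result

normal = {"browse": 50, "search": 30, "order": 20}
stress = anti_profile(normal, "order")
# {'browse': 10, 'search': 10, 'order': 80}
```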
Continue to increase the load for this scenario, and observe the application response at various load levels. The time duration of the tests depends upon the scenario. For example, it might take 10 to 15 days to simulate deadlocks on a four-processor machine, but the same deadlocks may be simulated on an eight-processor machine in half that time.
Identify the metrics corresponding to the potential pitfalls for each scenario. The metrics can be related to both performance and throughput goals, in addition to ones that provide information about the potential problems. This might include custom performance counters that have been embedded in the application. For example, this might be a performance counter used to monitor the memory usage of a particular memory stream used to serialize a DataSet.
For the place order scenario, where contention-related issues are the focus, you need to measure the metrics that report contention in addition to the basic set of counters. Table 16.6 shows the metrics that have been identified for the place order scenario.
Table 16.6: Metrics to Measure During Stress Testing of the Place Order Scenario
Object | Counter | Instance |
---|---|---|
Base set of metrics | ||
Processor | % Processor Time | _Total |
Process | Private Bytes | aspnet_wp |
Memory | Available MBytes | N/A |
ASP.NET | Requests Rejected | N/A |
ASP.NET | Request Execution Time | N/A |
ASP.NET Applications | Requests/Sec | Your virtual dir |
Contention-related metrics | ||
.NET CLR LocksAndThreads | Contention Rate / sec | aspnet_wp |
.NET CLR LocksAndThreads | Current Queue Length | aspnet_wp |
Document your test cases for the various workload patterns identified in step 2. For example:
Test 1 – Place order scenario:
- 1,000 simultaneous users.
- Use a random think time between 1 and 10 seconds in the test script after each operation.
- Run the test for 2 days.
Test 1 – Expected results:
- The ASP.NET worker process should not recycle because of deadlock.
- Throughput should not fall below 35 requests per second.
- Response time should not be greater than 7 seconds.
- Server busy errors caused by contention-related issues should not be more than 10 percent of total responses.
Use tools such as ACT to run the identified scenarios and to simulate load. This allows you to capture the data for your metrics.
**Note** Make sure that the client computers that you use to generate load are not overly stressed. Resource utilization, such as processor and memory, should not be very high.
Analyze the captured data and compare the results against the metric's accepted level. If the results indicate that your required performance levels have not been attained, analyze and fix the cause of the bottleneck. To address the issue, you might need to do one of the following:
- Perform a design review. For more information, see Chapter 4, "Architecture and Design Review of a .NET Application for Performance and Scalability."
- Perform a code review. For more information, see Chapter 13, "Code Review: .NET Application Performance."
- Examine the stack dumps of the worker process to diagnose the exact cause of deadlocks.
For example, to reduce the contention for the place order scenario of the sample application, you can implement queues into which orders are posted and are processed serially rather than processing them immediately after the order is placed.
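The queued-order idea can be sketched as follows. This is a minimal illustration of posting orders to a queue and processing them serially with a single worker, rather than processing each order inline; all names are illustrative and not taken from the sample application.

```python
# Orders are enqueued by request handlers and drained serially by one worker,
# which removes contention on the order-processing path.
import queue
import threading

order_queue = queue.Queue()
processed = []

def order_worker():
    """Drain the queue, processing one order at a time (serial processing)."""
    while True:
        order = order_queue.get()
        if order is None:           # sentinel: shut the worker down
            break
        processed.append(order)     # stand-in for the real order processing
        order_queue.task_done()

worker = threading.Thread(target=order_worker)
worker.start()

# Request handlers just enqueue the order and return immediately.
for order_id in range(5):
    order_queue.put({"order_id": order_id})

order_queue.put(None)   # ask the worker to stop after draining the queue
worker.join()
```

Because a single worker drains the queue, orders are processed one at a time in arrival order, which trades some latency for reduced lock contention.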
More Information
- For more information on tools and techniques for debugging, see "Production Debugging for .NET Framework Applications," at https://msdn.microsoft.com/en-us/library/ms954594.aspx.
- For more information on debugging deadlocks, see "Scenario: Contention or Deadlock Symptoms" in "Production Debugging for .NET Framework Applications," at https://msdn.microsoft.com/en-us/library/ms954592.aspx.
- For more information on deadlocks, see Microsoft Knowledge Base article 317723, "INFO: What Are Race Conditions and Deadlocks?" at https://support.microsoft.com/default.aspx?scid=kb;en-us;317723.
Workload modeling is the process of identifying one or more workload profiles to be simulated against the target test systems. Each workload profile represents a variation of the following attributes:
- Key scenarios.
- Numbers of simultaneous users.
- Rate of requests.
- Patterns of requests.
The workload model defines how each of the identified scenarios is executed. It also defines approximate data access patterns and identifies user types and characteristics.
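One way to capture the workload-profile attributes listed above is as a simple data structure. The shape below is purely illustrative; no testing tool requires it.

```python
# A workload profile as data: key scenarios, simultaneous users, request
# rate, and a request pattern (scenario -> fraction of traffic).
from dataclasses import dataclass, field

@dataclass
class WorkloadProfile:
    key_scenarios: list            # e.g. ["Browse", "Search", "Place order"]
    simultaneous_users: int        # users with active connections
    requests_per_second: float     # overall rate of requests
    request_pattern: dict = field(default_factory=dict)

peak = WorkloadProfile(
    key_scenarios=["Browse", "Search", "Place order"],
    simultaneous_users=1000,
    requests_per_second=35.0,
    request_pattern={"Browse": 0.5, "Search": 0.3, "Place order": 0.2},
)
```

A test plan can then define one such profile per workload variation and iterate over them during test execution.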
You can determine patterns and call frequency by using either the predicted usage of your application based on market analysis, or if your application is already in production, by analyzing server log files. Some important questions to help you determine the workload for your application include:
What are your key scenarios?
Identify the scenarios that are critical for your application from a performance perspective. You should capture these scenarios as a part of the requirements analysis performed in the very early stages of your application development life cycle.
What is the maximum expected number of users logged in to your application?
Simultaneous users are users who have active connections to the same Web site. This represents the maximum operating capacity for your application. For the sample e-commerce application, assume 1,000 simultaneous users. If you are developing a new application, you can obtain these numbers by working with the marketing team and using the results of the team's analysis. For existing applications, you can identify the number of simultaneous users by analyzing your Internet Information Services (IIS) logs.
What is the possible set of actions that a user can perform?
This depends on the actions a user can perform when logged in to the application. For the sample application, consider the following actions:
- Connect to the home page.
- Log on to the application.
- Browse the product catalog.
- Search for specific products.
- Add products to the shopping cart.
- Validate and place an order.
- Log out from the application.
What are the various user profiles for the application?
You can group together the various types of users and the actions they perform. For example, there are groups of users who simply connect and browse the product catalog, and there are other users who log on, search for specific products, and then log out. The users who actually order products are a subset of the users who log on.
Based on this approach, you can classify your user profiles. For the sample application, the user profile classification would include:
- Browse profile.
- Search profile.
- Place order profile.
What are the operations performed by an average user for each profile?
The operations performed by an average user for each profile are based on marketing data for a new application and IIS log analysis for an existing one. For example, if you are developing a business-to-consumer e-commerce site, your research-based marketing data tells you that, on average, a user who buys products buys at least three products from your site in a single session.
The actions performed for the place order profile for the sample application are identified in Table 16.7.
Table 16.7: Actions Performed for the Place Order Profile
Operation | Number of times performed in a single session |
---|---|
Connect to the home page. | 1 |
Log on to the application. | 1 |
Browse the product catalog. | 4 |
Search for specific products. | 2 |
Add products to the shopping cart. | 3 |
Validate and place an order. | 1 |
Log off from the application. | 1 |

A typical user for the place order profile might not search your site, but you can assume such actions by averaging out the activity performed by various users for a specific profile. You can then create a table of actions performed by an average user for all the profiles for your application.
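The per-session operation counts in Table 16.7 can drive a simple session generator for a test script: each simulated "place order" user performs the operations, in order, the listed number of times. The operation names below are illustrative stand-ins for the real request sequences.

```python
# Expand the Table 16.7 (operation, count) pairs into a flat session script.
PLACE_ORDER_PROFILE = [
    ("connect_home_page", 1),
    ("log_on", 1),
    ("browse_catalog", 4),
    ("search_products", 2),
    ("add_to_cart", 3),
    ("place_order", 1),
    ("log_off", 1),
]

def session_operations(profile):
    """Expand (operation, count) pairs into a flat, ordered operation list."""
    ops = []
    for name, count in profile:
        ops.extend([name] * count)
    return ops
```

The expanded list gives the total number of operations an average user performs in one session, which is also useful when estimating requests per second for a given user load.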
What is the average think time between requests?
Think time is the time spent by the user between two consecutive requests. During this time, the user views the information displayed on a page or enters details such as credit card numbers, depending on the nature of operation being performed.
Think time can vary depending on the complexity of the data on a page. For example, think time for the logon page is less than the think time for an order placement page where the user must enter credit card details. You can average out the think time for all the requests.
For more information about think time, see "Testing Considerations," later in this chapter.
What is the expected user profile mix?
The usage pattern for each scenario gives an idea in a given time frame of the percentage mix of business actions performed by users. An example is shown in Table 16.8.
Table 16.8: Sample User Profile Over a Period of Time
User profile | Percentage | Simultaneous users |
---|---|---|
Browse | 50% | 500 |
Search | 30% | 300 |
Order | 20% | 200 |
Total | 100% | 1,000 |

What is the duration for which the test needs to be executed?
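The per-profile user counts in Table 16.8 follow directly from the percentage split and the total simultaneous-user count. A minimal sketch of that calculation:

```python
# Derive simultaneous users per profile from the mix percentages and the
# total simultaneous-user count.
def users_per_profile(total_users, mix):
    """mix maps profile name -> fraction of total users (fractions sum to 1)."""
    return {profile: round(total_users * fraction)
            for profile, fraction in mix.items()}

mix = {"Browse": 0.50, "Search": 0.30, "Order": 0.20}
print(users_per_profile(1000, mix))
# -> {'Browse': 500, 'Search': 300, 'Order': 200}
```

Keeping the mix as fractions makes it easy to rescale the same profile mix to a different total user load between test cycles.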
The test duration depends on the workload pattern for your application and may range from 20 minutes to as long as a week. Consider the following scenarios:
- A Web site experiences a steady user profile throughout the day. For example, if you host an online e-commerce site, the operations performed by an average user visiting the site are unlikely to vary during the course of an 8-to-10 hour business day. The test script for such a site should not vary the profile of users over the test period. The tests are executed simply by varying the number of users for each test cycle. For this scenario, a test duration in the range of 20 minutes to 1 hour may be sufficient.
- A Web site experiences a varying profile of users on a given day. For example, a stock brokerage site might experience more users buying (rather than selling) stocks in the morning and the opposite in the evening. The test duration for this type of application may range between 8 and 16 hours.
- You may even run tests continuously for a week or multiple weeks with a constant load to observe how your application performs over a sustained period of activity. This helps you identify any slow memory leaks over a period of time.
Your test environment should be capable of simulating production environment conditions. To do this, keep the following considerations in mind during the test cycle:
- Do not place too much stress on the client.
- Create baselines for your test setup.
- Allow for think time in your test script.
- Consider test duration.
- Minimize redundant requests during testing.
- Consider simultaneous versus concurrent users.
- Set an appropriate warm up time.
The next sections describe each of these considerations.
Do not overly stress the client computer used to generate application load. The processor and memory usage on your client computers should be well below the threshold limit (CPU: 75 percent). Otherwise, the client computer may end up as a bottleneck during your testing. To avoid this, consider distributing the load on multiple client computers. Also, monitor the network interface bandwidth utilization to ensure that the network is not getting congested.
Create baselines for your test setup for all types of tests you need to perform. The setup should be representative of the real-life environment. This has two advantages. First, because the hardware is constant across test runs, results from the various categories of tests are not skewed by hardware differences; your application modifications are the only variable. Second, you can depend on the test results because the test setup closely mirrors the real-life environment.
Think time reflects the amount of time that a typical user is likely to pause for thought while interacting with your application. During this time, the user views the information displayed on the page or enters details such as credit card numbers or addresses. You should average the think time across various operations.
If you do not include think time in your script, there is no time gap between two subsequent requests on a per-user basis. This directly translates to all users firing requests to the server concurrently. So, for a concurrent load of 200 users, 200 requests are fired at the server at the same instant. This is not a true representation of how typical users use your application.
**Note** Omitting think time can be helpful when you want to generate excessive load on the server.
Make sure that the think time in your test script is based on real life conditions. This varies according to the operations performed by the user, and it varies depending on the page the user is interacting with.
For your test scripts, you can program for either a fixed think time between consecutive requests or a random think time ranging between minimum and maximum values.
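The two think-time strategies can be sketched as follows. This is an illustrative helper, not an ACT API; in a real test script the pause would sit between two consecutive requests for the same virtual user.

```python
# Fixed versus random think time between simulated requests.
import random
import time

def think(fixed=None, minimum=1.0, maximum=10.0):
    """Pause for a fixed think time, or a random one in [minimum, maximum]."""
    delay = fixed if fixed is not None else random.uniform(minimum, maximum)
    time.sleep(delay)
    return delay

# Examples (commented out to avoid long pauses):
# think(fixed=2.0)               # fixed 2-second think time
# think(minimum=1, maximum=10)   # random think time between 1 and 10 seconds
```

A random think time in a realistic range usually models user behavior better than a fixed pause, because real users do not submit requests on a rigid schedule.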
Base the duration of your tests on your end goal. If the goal is to load test and monitor application behavior for your workload pattern, the test duration might range from 20 minutes to as long as one week. If the site expects users of a similar profile, and the average user performs the same set of operations throughout intraday activity, a test of 20 minutes to one hour is sufficient to generate data for load testing. You may want to run load tests for an extended period of four to five days to see how your application performs at peak operating capacity over a longer duration.
However, if your site expects users of different profiles during various hours of operation, you may need to test for at least 8 to 10 hours to simulate the various user profiles in the workload pattern.
For stress testing purposes, the end goal is to run tests to identify potential resource leaks and the corresponding degradation in application performance. This may require a longer duration, ranging from a couple of hours to a week, depending on the nature of the bottleneck.
For tests used to measure the transaction cost of an operation using transaction cost analysis, you might need to run tests for only approximately 20 minutes.
More Information
For more information about capacity planning and transaction cost analysis, see "How To: Perform Capacity Planning for .NET Framework Applications" in the "How To" section of this guide.
Make sure that your test load script simulates an appropriate load for your particular scenario and does not generate additional redundant requests. For example, consider a logon scenario. The complete operation typically consists of two requests:
- A GET request used to retrieve the initial page where the user supplies a logon name and password.
- A POST request when the user clicks the Logon button to verify the credentials.
The GET request is the same for all users and can easily be cached and served to all users. However, it is the POST request that is critical from a performance perspective. In this example, your test script should avoid sending a GET request before submitting a POST request. By avoiding the GET request within your test script, the client threads of the load generator are able to iterate faster through the loop for a given set of users, thereby generating more POST requests. This results in a more effective stress test for the actual POST operation. However, there may be conditions when even the response to a GET request is customized for a user; therefore, you should consider including both the GET and POST requests in the stress tests.
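The point above can be sketched as a test loop that, by default, sends only the POST for each iteration. The `send` callback and the URL are placeholders, not a real client API; ACT scripts express the same idea in their own scripting language.

```python
# Stress the POST (credential verification) path by omitting the cacheable
# GET, unless the GET response is user-specific and must be included.
def run_logon_stress(iterations, send, include_get=False):
    """Issue logon requests via send(method, url); return the POST count."""
    posts = 0
    for _ in range(iterations):
        if include_get:
            send("GET", "/logon.aspx")   # initial page (often cacheable)
        send("POST", "/logon.aspx")      # credential verification
        posts += 1
    return posts
```

Skipping the GET lets each client thread iterate faster, producing more POSTs per second against the operation you actually want to stress.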
Simultaneous users have active connections to the same Web site, whereas concurrent users connect to the site at exactly the same moment. Concurrent access is likely to occur at infrequent intervals. Your site may have 100 to 150 concurrent users but 1,000 to 1,500 simultaneous users.
To stress test your application by using tools such as ACT, use a think time of zero in your test scripts. This allows you to stress your application without any time lag, with all users concurrently accessing the site. By using an appropriate think time, however, you can more accurately mirror a real life situation in which a user takes some time before submitting a new request to the server.
High numbers of simultaneous users tend to produce spikes in your resource usage and can push the load beyond the concurrent load a server can handle. This results in occasional "server busy" errors. Tests with simultaneous users are very useful because they help you identify the actual load your system can handle without returning too many server busy errors.
ACT supports warm-up times. The warm-up time is used in a test script to ensure your application reaches a steady state before the test tool starts to record results. The warm-up time causes ACT to ignore the data from the first few seconds of a test run. This is particularly important for ASP.NET applications because the first few requests trigger just-in-time (JIT) compilation and caching.
The warm-up time is particularly relevant for short-duration tests, where initial startup costs would otherwise skew the test results.
To determine an appropriate warm-up time for your application, use the ASP.NET Applications\Compilations Total counter to measure the effects of JIT compilation. This counter increases every time a user action triggers the JIT compiler.
In some cases you may want to measure how long compilation and caching take. Measure this in a separate test; do not average it into your steady-state measurements.
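When a tool does not discard warm-up samples for you, the filtering can be done after the fact. A minimal sketch, with illustrative field names:

```python
# Drop per-request samples collected before the warm-up time has elapsed,
# so JIT compilation and cold caches do not skew steady-state averages.
def steady_state(samples, warmup_seconds):
    """Keep only samples whose timestamp is at or after the warm-up cutoff."""
    return [s for s in samples if s["elapsed_seconds"] >= warmup_seconds]

samples = [
    {"elapsed_seconds": 1,  "response_ms": 900},  # JIT compilation, cold cache
    {"elapsed_seconds": 5,  "response_ms": 400},
    {"elapsed_seconds": 30, "response_ms": 120},
    {"elapsed_seconds": 60, "response_ms": 110},
]
kept = steady_state(samples, 10)
avg = sum(s["response_ms"] for s in kept) / len(kept)
print(avg)  # -> 115.0
```

Note how the average over all four samples would be dominated by the first, warm-up-inflated response time.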
When you test, consider the following best practices.
When performing performance testing, make sure you do the following:
- Clear the application and database logs after each performance test run. Excessively large log files may artificially skew the performance results.
- Identify the correct server software and hardware to mirror your production environment.
- Use a single graphical user interface (GUI) client to capture end-user response time while a load is generated on the system. You may need to generate load by using different client computers, but to make sense of client-side data, such as response time or requests per second, you should consolidate data at a single client and generate results based on the average values.
- Include a buffer time between the incremental increases of users during a load test.
- Use different data parameters for each simulated user to create a more realistic load simulation.
- Monitor all computers involved in the test, including the client that generates the load. This is important because you should not overly stress the client.
- Prioritize your scenarios according to critical functionality and high-volume transactions.
- Use a zero think time if you need to fire concurrent requests. This can help you identify bottleneck issues.
- Stress test critical components of the system to assess their independent thresholds.
- Do not allow the test system resources to cross resource threshold limits by a significant margin during load testing, because this distorts the data in your results.
- Do not run tests in live production environments that have other network traffic. Use an isolated test environment that is representative of the actual production environment.
- Do not try to break the system during a load test. The intent of the load test is not to break the system. The intent is to observe performance under expected usage conditions. You can stress test to determine the most likely modes of failure so they can be addressed or mitigated.
- Do not place too much stress on the client test computers.
The metrics that you need to capture vary depending on the server role. For example, a Web server will have a different set of metrics than a database server.
The metrics in this section are divided into the following categories:
- Metrics for all servers
- Web server–specific metrics
- SQL Server–specific metrics
The next sections describe each of these categories.
Table 16.9 lists the set of metrics that you should capture on all your servers. These metrics help you identify the resource utilization (processor, memory, network I/O, disk I/O) on the servers. For more information on the metrics, see Chapter 15, "Measuring .NET Application Performance."
Table 16.9: Metrics to Be Measured on All Servers
Object | Counter | Instance |
---|---|---|
Network | ||
Network Interface | Bytes Received/sec | Each NIC card |
Network Interface | Bytes Sent/sec | Each NIC card |
Network Interface | Packets Received Discarded | Each NIC card |
Network Interface | Packets Outbound Discarded | Each NIC card |
Processors | ||
Processor | % Processor Time | _Total |
Processor | % Interrupt Time | _Total |
Processor | % Privileged Time | _Total |
System | Processor Queue Length | N/A |
System | Context Switches/sec | N/A |
Memory | ||
Memory | Available MBytes | N/A |
Memory | Pages/sec | N/A |
Memory | Cache Faults/sec | N/A |
Server | Pool Nonpaged Failures | N/A |
Process | ||
Process | Page Faults / sec | Total |
Process | Working Set | (Monitored process) |
Process | Private Bytes | (Monitored process) |
Process | Handle Count | (Monitored process) |
You can either capture a small set of metrics for the Web server that lets you evaluate the application performance with respect to your performance goals, or you can capture a bigger set of counters that helps you identify potential bottlenecks for your application.
Table 16.10 shows a set of counters that helps you evaluate your application performance with respect to goals. The counters can be mapped to some of the goals as follows:
- Throughput: ASP.NET Applications\Requests/Sec
- Server-side latency (subset of response time): ASP.NET\Request Execution Time
- Processor utilization: Processor\% Processor Time
- Memory utilized by the ASP.NET worker process: Process\Private Bytes (aspnet_wp)
- Free memory available for the server: Memory\Available MBytes
Table 16.10: Metrics for Application Performance Goals
Object | Counter | Instance |
---|---|---|
ASP.NET Applications | Requests/Sec | Application virtual directory |
ASP.NET | Request Execution Time | N/A |
ASP.NET Applications | Requests In Application Queue | Application virtual directory |
Processor | % Processor Time | _Total |
Memory | Available MBytes | N/A |
Process | Private Bytes | aspnet_wp |

The set of metrics shown in Table 16.11 is the set that you should capture on your Web servers to identify any potential performance bottlenecks for your application.
Table 16.11: Web Server–Specific Metrics
Object | Counter | Instance |
---|---|---|
ASP.NET | Requests Current | N/A |
ASP.NET | Requests Queued | N/A |
ASP.NET | Requests Rejected | N/A |
ASP.NET | Request Execution Time | N/A |
ASP.NET | Request Wait Time | N/A |
ASP.NET Applications | Requests/Sec | Your virtual dir |
ASP.NET Applications | Requests Executing | Your virtual dir |
ASP.NET Applications | Requests In Application Queue | Your virtual dir |
ASP.NET Applications | Requests Timed Out | Your virtual dir |
ASP.NET Applications | Cache Total Hit Ratio | Your virtual dir |
ASP.NET Applications | Cache API Hit Ratio | Your virtual dir |
ASP.NET Applications | Output Cache Hit Ratio | Your virtual dir |
ASP.NET Applications | Errors Total/sec | Your virtual dir |
ASP.NET Applications | Pipeline Instance Count | Your virtual dir |
.NET CLR Memory | % Time in GC | aspnet_wp |
.NET CLR Memory | # Bytes in all Heaps | aspnet_wp |
.NET CLR Memory | # of Pinned Objects | aspnet_wp |
.NET CLR Memory | Large Object Heap Size | aspnet_wp |
.NET CLR Exceptions | # of Exceps Thrown /sec | aspnet_wp |
.NET CLR LocksAndThreads | Contention Rate / sec | aspnet_wp |
.NET CLR LocksAndThreads | Current Queue Length | aspnet_wp |
.NET CLR Data | SqlClient: Current # connection pools | |
.NET CLR Data | SqlClient: Current # pooled connections | |
Web Service | ISAPI Extension Requests/sec | Your Web site |
The set of metrics shown in Table 16.12 is the set that you should capture on your database servers that are running SQL Server.
Table 16.12: SQL Server–Specific Metrics
Object | Counter | Instance |
---|---|---|
SQL Server: General Statistics | User Connections | N/A |
SQL Server: Access Methods | Index Searches/sec | N/A |
SQL Server: Access Methods | Full Scans/sec | N/A |
SQL Server: Databases | Transactions/sec | (Your Database) |
SQL Server: Databases | Active Transactions | (Your Database) |
SQL Server: Locks | Lock Requests/sec | _Total |
SQL Server: Locks | Lock Timeouts/sec | _Total |
SQL Server: Locks | Lock Waits/sec | _Total |
SQL Server: Locks | Number of Deadlocks/sec | _Total |
SQL Server: Locks | Average Wait Time (ms) | _Total |
SQL Server: Latches | Average Latch Wait Time (ms) | N/A |
SQL Server: Cache Manager | Cache Hit Ratio | _Total |
SQL Server: Cache Manager | Cache Use Counts/sec | _Total |
Disk I/O | ||
PhysicalDisk | Avg. Disk Queue Length | (disk instance) |
PhysicalDisk | Avg. Disk Read Queue Length | (disk instance) |
PhysicalDisk | Avg. Disk Write Queue Length | (disk instance) |
PhysicalDisk | Avg. Disk sec/Read | (disk instance) |
PhysicalDisk | Avg. Disk sec/Transfer | (disk instance) |
PhysicalDisk | Avg. Disk Bytes/Transfer | (disk instance) |
PhysicalDisk | Disk Writes/sec | (disk instance) |
This section shows a report template that you can use to create load test reports for your applications. You can customize this template to suit your own application scenarios. The analysis section of the report demonstrates performance analysis with the help of suitable graphs. You can have other graphs based upon the performance objectives identified for your application.
Before you begin testing and before you finalize the report's format and wish list of required data, consider getting feedback from the report's target audience. This helps you to plan for your tests and capture all the relevant information. Table 16.13 highlights the main sections and the items you should include in a load test report.
Table 16.13: Load Test Report Template
Section | Item | Details |
---|---|---|
Hardware details | Web server(s) | Processor: 2 gigahertz (GHz), dual processor; Memory: 1 GB RAM; Number of servers: 2; Load balancing: Yes |
Hardware details | Server(s) running SQL Server | Processor: 2 GHz, dual processor; Memory: 1 GB RAM; Number of servers: 1; Load balancing: No |
Hardware details | Client(s) | Memory: 1 GB RAM; Number of servers: 2 |
Hardware details | Network | Total bandwidth for the setup: 100 megabits per second (Mbps); Network bandwidth between client and server: 100 Mbps |
Software details | Web server(s) | Operating system: Microsoft® Windows® 2000 Advanced Server SP4; Web server: IIS 5.0; Platform: .NET Framework 1.1 |
Software details | Servers running SQL Server | Operating system: Windows 2000 Advanced Server SP4; Database server: SQL Server 2000 SP3 |
Software details | Client(s) | Tool used: Microsoft Application Center Test |
Configuration details | IIS configuration | Session time out:; HTTP Keep-Alive: |
Configuration details | Machine.config configuration | MaxConnections:; MaxWorkerThreads:; MaxIOThreads:; MinFreeThreads:; MinLocalRequestFreeThreads:; executionTimeOut: |
Configuration details | Web.config configuration | `<compilation debug="false"/>` `<authentication mode="Windows" />` `<trace enabled="false" requestLimit="10" pageOutput="false" traceMode="SortByTime" localOnly="true" />` `<sessionState mode="InProc" timeout="20"/>` |
Configuration details | Application-specific configuration | Include application-specific configuration here, such as custom attributes added to the configuration file. |
Table 16.14 shows the workload profile for the sample e-commerce application.
Table 16.14: Sample Application Workload Profile
User profile | Percentage |
---|---|
Browse | 50 percent |
Search | 30 percent |
Order | 20 percent |
Total | 100 percent |
Metric | Value |
---|---|
Number of simultaneous users | 100 – 1,600 users |
Total number of tests | 16 |
Test duration | 1 hour |
Think time | Random think time of 10 seconds |
Table 16.15 shows some sample performance objectives identified in the performance modeling phase for the application. For more information on performance modeling, see Chapter 2, "Performance Modeling."
Table 16.15: Performance Objectives
Performance Objective | Metric |
---|---|
Throughput and response time | Requests/second:; Response time/TTLB: |
System performance (for each server) | Processor utilization:; Memory:; Disk I/O (for database server): |
Application-specific metrics (custom performance counters) | Orders/second:; Searches/second: |
Table 16.16 lists typical Web server metrics.
Table 16.16: Web Server Metrics
Object: Counter | Value (100 users) | Value (200 users) |
---|---|---|
Network | ||
Network Interface: Bytes Received/sec | ||
Network Interface: Bytes Sent/sec | ||
Network Interface: Packets Received Discarded | ||
Network Interface: Packets Outbound Discarded | ||
Processors | ||
Processor: % Processor Time | ||
Processor: % Interrupt Time | ||
Processor: % Privileged Time | ||
System: Processor Queue Length | ||
System: Context Switches/sec | ||
Memory | ||
Memory: Available MBytes | ||
Memory: Pages/sec | ||
Memory: Cache Faults/sec | ||
Server: Pool Nonpaged failures | ||
Process | ||
Process: Page Faults / sec | ||
Process: Working Set | ||
Process: Private Bytes | ||
Process: Handle Count | ||
ASP.NET | ||
ASP.NET: Requests Current | ||
ASP.NET: Requests Queued | ||
ASP.NET: Requests Rejected | ||
ASP.NET: Request Execution Time | ||
ASP.NET: Request Wait Time | ||
ASP.NET Applications: Requests/Sec | ||
ASP.NET Applications: Requests Executing | ||
ASP.NET Applications: Requests In Application Queue | ||
ASP.NET Applications: Requests Timed Out | ||
ASP.NET Applications: Cache Total Hit Ratio | ||
ASP.NET Applications: Cache API Hit Ratio | ||
ASP.NET Applications: Output Cache Hit Ratio | ||
ASP.NET Applications: Errors Total/sec | ||
ASP.NET Applications: Pipeline Instance Count | ||
.NET CLR Memory: % Time in GC | ||
.NET CLR Memory: # Bytes in all Heaps | ||
.NET CLR Memory: # of Pinned Objects | ||
.NET CLR Memory: Large Object Heap Size | ||
.NET CLR Exceptions: # of Exceps Thrown /sec | ||
.NET CLR LocksAndThreads: Contention Rate / sec | ||
.NET CLR LocksAndThreads: Current Queue Length | ||
.NET CLR Data: SqlClient: Current # connection pools | ||
.NET CLR Data: SqlClient: Current # pooled connections | ||
Web Service: ISAPI Extension Requests/sec | ||
Table 16.17 lists typical SQL Server metrics.
Table 16.17: SQL Server Metrics
Object: Counter | Values (100 users) | Values (200 users) |
---|---|---|
SQL Server: General Statistics: User Connections | ||
SQL Server Access Methods: Index Searches/sec | ||
SQL Server Access Methods: Full Scans/sec | ||
SQL Server: Databases: Transactions/sec | ||
SQL Server: Databases: Active Transactions | ||
SQL Server: Locks: Lock Requests/sec | ||
SQL Server: Locks: Lock Timeouts/sec | ||
SQL Server: Locks: Lock Waits/sec | ||
SQL Server: Locks: Number of Deadlocks/sec | ||
SQL Server: Locks: Average Wait Time (ms) | ||
SQL Server: Latches: Average Latch Wait Time(ms) | ||
SQL Server: Cache Manager: Cache Hit Ratio | ||
SQL Server: Cache Manager: Cache Use Counts/sec | ||
Disk I/O | ||
PhysicalDisk: Avg. Disk Queue Length | ||
PhysicalDisk: Avg. Disk Read Queue Length | ||
PhysicalDisk: Avg. Disk Write Queue Length | ||
PhysicalDisk: Avg. Disk sec/Read | ||
PhysicalDisk: Avg. Disk sec/Transfer | ||
PhysicalDisk: Avg. Disk Bytes/Transfer | ||
PhysicalDisk: Disk Writes/sec | ||
After you capture and consolidate your results, analyze the captured data and compare the results against the accepted level for each metric. If the results indicate that your required performance levels have not been attained, analyze and fix the cause of the bottleneck. The data that you collect helps you analyze your application with respect to your application's performance objectives:
- Throughput versus user load
- Response time versus user load
- Resource utilization versus user load
The next sections describe each of these data comparisons.
The throughput versus user load graph for the test results is shown in Figure 16.3.
Figure 16.3: Throughput versus user load graph
The graph identifies the point-of-service level failure. This point represents the user load the sample application can handle to meet service level requirements for requests per second.
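Given throughput measurements at increasing user loads, the point of service-level failure can be located programmatically: it is the first load at which measured throughput drops below the service-level requirement. The sketch below uses the sample application's 35 requests-per-second goal; the data points are illustrative, not actual test results.

```python
# Locate the point of service-level failure from (user load, requests/sec)
# pairs collected during the load-test cycles.
SLA_RPS = 35  # service-level requirement: requests per second

throughput_by_load = [
    (100, 38.0), (200, 39.5), (400, 40.2), (800, 36.1), (1200, 31.0),
]

def service_level_failure_point(results, sla_rps):
    """Return the first user load whose throughput falls below the SLA."""
    for users, rps in results:
        if rps < sla_rps:
            return users
    return None  # the SLA was met at every tested load

print(service_level_failure_point(throughput_by_load, SLA_RPS))  # -> 1200
```

The same scan, applied to response-time data against a response-time goal, identifies the corresponding failure point on the response time versus user load graph.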
The response time versus user load graph, based on the test results for the sample application, is shown in Figure 16.4.
Figure 16.4: Response time versus user load graph
The response time is essentially flat, rising gently and linearly, for low to medium levels of load. After queuing on the server becomes excessive, the response time begins to increase sharply. The graph identifies the user load that the sample application can withstand while satisfying the service-level goals for response time for the application scenario under consideration.
The graph in Figure 16.5 shows the variation of processor utilization with user load.
Figure 16.5: Processor versus user load graph
This graph identifies the workload for the sample application that is below the threshold limit set for processor utilization.
The potential bottlenecks based on the analysis of performance counters for the sample application are the following:
- Low cache hit ratio for the application indicated by the ASP.NET Applications: Cache Total Hit Ratio performance counter.
- A significant amount of time is spent by the garbage collector in cleanup (indicated by .NET CLR Memory\% Time in GC). This indicates inefficient cleanup of resources.
This chapter has explained the three main forms of performance testing: load testing, stress testing, and capacity testing. It has also presented processes for load testing and stress testing. Use these processes together with the other guidance and recommendations in this chapter to plan, develop, and run testing approaches for your applications.
By acting appropriately on the test output, you can fine tune your application and remove bottlenecks to improve performance, and you can ensure that your application is able to scale to meet future capacity demands.
For more information about testing performance, see the following resources in this guide:
- Chapter 15, "Measuring .NET Application Performance"
- Chapter 17, "Tuning .NET Application Performance"
See the following How Tos in the "How To" section of this guide:
- "How To: Use ACT to Test Performance and Scalability"
- "How To: Use ACT to Test Web Services Performance"
- "How To: Perform Capacity Planning for .NET Framework Applications"
For further reading, see the following resources:
- Performance Testing Microsoft .NET Web Applications, MS Press®.
- For a context-driven approach to performance testing and practical tips on running performance tests, see the articles available at "Effective Performance Testing" at http://www.perftestplus.com/pubs.htm.
- For a systematic, quantitative approach to performance tuning that helps you find problems quickly, identify potential solutions, and prioritize your efforts, see "Five Steps to Solving Software Performance Problems," by Lloyd G. Williams, Ph.D., and Connie U. Smith, Ph.D., at http://www.perfeng.com/papers/step5.pdf.
- For techniques and strategies for building a collaborative relationship between test and development around performance tuning, see "Part 11: Collaborative Tuning" from "Beyond Performance Testing" by Scott Barber at http://www.perftestplus.com/resources/BPT11.pdf.
- For insight into bottleneck identification and analysis, see "Part 7: Identify the Critical Failure or Bottleneck" from "Beyond Performance Testing" by Scott Barber at http://www.perftestplus.com/resources/BPT7.pdf.