Chapter 4 — Architecture and Design Review of a .NET Application for Performance and Scalability
**Retired Content**
This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.
J.D. Meier, Srinath Vasireddy, Ashish Babbar, Rico Mariani, and Alex Mackman
Microsoft Corporation
May 2004
Related Links
Home Page for Improving .NET Application Performance and Scalability
Chapter 3 — Design Guidelines for Application Performance
Checklist: Architecture and Design Review for Performance and Scalability
Send feedback to Scale@microsoft.com
Summary: This chapter provides a comprehensive set of performance and scalability questions to consider when you review the architecture and design of your application. These questions help you identify and eliminate poor design decisions up front in the application life cycle.
Contents
Objectives
Overview
How to Use This Chapter
Architecture and Design Review Process
Deployment and Infrastructure
Coupling and Cohesion
Communication
Concurrency
Resource Management
Caching
State Management
Data Structures and Algorithms
Data Access
Exception Handling
Class Design Considerations
Summary
Additional Resources
Objectives
- Analyze and review the performance and scalability aspects of application architecture and design.
- Learn what to look for and the key questions to ask when reviewing existing and new application architecture and design.
Overview
The performance characteristics of your application are determined by its architecture and design. Your application must be architected and designed with sound principles and best practices; no amount of code fine-tuning can disguise the performance implications of poor architecture or design decisions.
This chapter starts by introducing a high-level process you can follow for your architecture and design reviews. It then presents deployment and infrastructure considerations, followed by a comprehensive set of questions that you can use to help drive your application reviews. Review questions are presented in sections that are organized by the performance and scalability frame introduced in Chapter 1, "Fundamentals of Engineering for Performance."
How to Use This Chapter
This chapter presents a series of questions that help you perform a thorough review of your application architecture and design. There are several ways to get the most from this chapter:
- Jump to topics or read from beginning to end. The main headings in this chapter help you locate the topics that interest you. Alternatively, you can read this chapter from beginning to end to gain a thorough appreciation of performance and scalability design issues.
- Integrate performance and scalability review into your design process. Start reviewing as soon as possible. As your design evolves, review the changes and refinements by using the questions presented in this chapter.
- Know the key performance and scalability principles. Read Chapter 3, "Design Guidelines for Application Performance" to learn about the overarching principles that help you design Web applications that meet your performance and scalability objectives. It is important to know these fundamental principles to improve the results of your review process.
- Evolve your performance and scalability review. This chapter provides the questions you should ask to improve the performance and scalability of your design. To complete the process, it is highly likely that you will need to add specific questions that are unique to your application.
- Use the accompanying checklist in the "Checklists" section of this guide. Use the "Checklist: Architecture and Design Review for Performance and Scalability" checklist to quickly view and evaluate the guidelines presented in this chapter.
Architecture and Design Review Process
The review process analyzes the performance implications of your application's architecture and design. If you have just completed your application's design, the design documentation can help you with this process. Regardless of how comprehensive your design documentation is, you must be able to decompose your application and identify key items, including boundaries, interfaces, data flow, caches, and data stores. You must also know the physical deployment configuration of your application.
Consider the following aspects when you review the architecture and design of your application:
- Deployment and infrastructure. You review the design of your application in relation to the target deployment environment and any associated restrictions that might be imposed by company or institutional policies.
- Performance and scalability frame. Pay particular attention to the design approaches you have adopted for those areas that most commonly exhibit performance bottlenecks. This guide refers to these collectively as the performance and scalability frame.
- Layer-by-layer analysis. You walk through the logical layers of your application and examine the performance characteristics of the various technologies that you have used within each layer. For example, ASP.NET in the presentation layer; Web services, Enterprise Services, and Microsoft® .NET remoting within the business layer; and Microsoft SQL Server™ within the data access layer.
Figure 4.1 shows this three-pronged approach to the review process.
Figure 4.1: The application review process
The remainder of this chapter presents the key considerations and questions to ask during the review process for each of these distinct areas, except for technologies. For more information about questions to ask for each of the technologies, see Chapter 13, "Code Review: .NET Application Performance."
Deployment and Infrastructure
Assess your deployment environment and any deployment restrictions well before deployment. The key issues that you need to consider include:
- Do you need a distributed architecture?
- What distributed communication should you use?
- Do you have frequent interaction across boundaries?
- What restrictions does your infrastructure impose?
- Do you consider network bandwidth restrictions?
- Do you share resources with other applications?
- Does your design support scaling up?
- Does your design support scaling out?
Do you need a distributed architecture?
If you host your business logic on a remote server, be aware of the significant performance implications of the additional overhead, such as network latency, data serialization and marshaling, and, often, additional security checks. Figure 4.2 shows the nondistributed and distributed architectures.
Figure 4.2: Nondistributed and distributed architectures
By keeping your logical layers physically close to one another, such as on the same server or even in the same process, you minimize round trips and reduce call latency. If you do use a remote application tier, make sure your design minimizes the overhead. Where possible, batch together calls that represent a single unit of work, design coarse-grained services, and cache data locally, where appropriate. For more information about these design guidelines, see Chapter 3, "Design Guidelines for Application Performance."
The following are some sample scenarios where you would opt for a remote application tier:
- You might need to add a Web front end to an existing set of business logic.
- Your Web front end and business logic might have different scaling needs. If you need to scale out only the business logic part, but both front end and business logic are on the same computer, you unnecessarily end up having multiple copies of the front end, which adds to the maintenance overhead.
- You might want to share your business logic among multiple client applications.
- The security policy of your organization might prohibit you from installing business logic on your front-end Web servers.
- Your business logic might be computationally intensive, so you want to offload the processing to a separate server.
What distributed communication should you use?
Services are the preferred communication mechanism across application boundaries, including platform, deployment, and trust boundaries.
If you use Enterprise Services, it should be within a service implementation, or when you run into performance issues using Web services for cross-process communication. Make sure you use Enterprise Services only if you need its additional feature set, such as object pooling, declarative distributed transactions, role-based security, and queued components.
If you use .NET remoting, it should be for cross-application domain communication within a single process, not for cross-process or cross-server communication. The other situation where you might need .NET remoting is when you have to support custom wire protocols; however, understand that this customization will not port cleanly to future Microsoft implementations.
More Information
For more information, see "Prescriptive Guidance for Choosing Web Services, Enterprise Services, and .NET Remoting" in Chapter 11, "Improving Remoting Performance."
Do you have frequent interaction across boundaries?
Ensure that your design places frequently interacting components that perform a single unit of work within the same boundary or as close to each other as possible. Components that frequently interact across boundaries can hurt performance because of the increased overhead associated with call latency and serialization. The boundaries that you need to consider, from a performance perspective, are application domains, apartments, processes, and servers, listed in order of increasing cost.
What restrictions does your infrastructure impose?
Target environments are often rigidly defined, and your application design needs to accommodate the imposed restrictions. Identify and assess any restrictions imposed by your deployment infrastructure, such as protocol restrictions and firewalls. Consider the following:
Do you use internal firewalls?
Is there an internal firewall between your Web server and other remote servers? This limits your choice of technology on the remote server and the related communication protocols that you can use. Figure 4.3 shows an internal firewall.
Figure 4.3: Internal and external firewalls
If your remote server hosts business logic and your internal firewall opens only port 80, you can use HTTP and Web services for remote communication. This requires Internet Information Services (IIS) on your application server.
If your remote server runs SQL Server, you need to open TCP port 1433 (or an alternative port, as configured in SQL Server) on the internal firewall. When using distributed transactions involving the database, you also have to open the necessary ports for the Distributed Transaction Coordinator (DTC). For more information about configuring DCOM to use a specific port range, see "Enterprise Services (COM+) Security Considerations" in Chapter 17, "Securing Your Application Server," in "Improving Web Application Security: Threats and Countermeasures" on MSDN® at https://msdn.microsoft.com/en-us/library/ms994921.aspx.
Do you use SSL for your ASP.NET application?
If your ASP.NET application uses SSL, consider the following guidelines:
- Keep page sizes as small as possible and minimize your use of graphics. Evaluate the use of view state and server controls for your Web pages. Both of these tend to have a significant impact on page size. To find out whether your page sizes are appropriate for your scenario, you should conduct tests at the targeted bandwidth levels. For more information, see Chapter 16, "Testing .NET Application Performance."
- Use client-side validation to reduce round trips. For security reasons, you should also use server-side validation. Client-side validation is easily bypassed.
- Partition your secure and nonsecure pages to avoid the SSL overhead for anonymous pages.
Do you consider network bandwidth restrictions?
Consider the following questions in relation to the available network bandwidth in your particular deployment environment:
Do you know your network bandwidth?
To identify whether you are constrained by network bandwidth, evaluate the size of the average request and response, and multiply it by the expected request rate. For example, an average of 10 KB of combined request and response data at 500 requests per second consumes roughly 40 Mbps (10 KB × 8 bits × 500). The total figure should be considerably lower than the available network bandwidth.
If you expect network congestion or bandwidth to be an issue, you should carefully evaluate your communication strategy and implement various design patterns to help reduce network traffic by making chunkier calls. For example, do the following to reduce network round trips:
- Use wrapper objects with coarse-grained interfaces to encapsulate and coordinate the functionality of one or more business objects that have not been designed for efficient remote access.
- Wrap and return the data that you need by returning an object by value in a single remote call.
- Batch your work. For example, you can batch SQL queries and execute them as a batch in SQL Server.
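The SQL batching guideline above can be sketched with ADO.NET. The connection string, table, and column names here are hypothetical; the point is that both statements travel to SQL Server in a single round trip:

```csharp
using System.Data;
using System.Data.SqlClient;

public class OrderData
{
    // Hypothetical schema: fetches an order list and the customer record
    // in one network round trip instead of two.
    public static DataSet GetOrderSummary(string connectionString, int customerId)
    {
        // Two SELECT statements batched into one command.
        string batch =
            "SELECT OrderID, OrderDate FROM Orders WHERE CustomerID = @CustomerID;" +
            "SELECT CustomerID, CompanyName FROM Customers WHERE CustomerID = @CustomerID;";

        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            SqlDataAdapter adapter = new SqlDataAdapter(batch, conn);
            adapter.SelectCommand.Parameters.Add("@CustomerID", SqlDbType.Int).Value = customerId;

            DataSet results = new DataSet();
            // Fill opens and closes the connection; the two result sets
            // arrive as tables "Table" and "Table1".
            adapter.Fill(results);
            return results;
        }
    }
}
```

Stored procedures that return multiple result sets achieve the same effect and are usually preferable to inline batches.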
For more information, see "Minimize the Amount of Data Sent Across the Wire" in Chapter 3, "Design Guidelines for Application Performance."
Have you considered the client bandwidth?
Make sure you know the minimum bandwidth that clients are likely to use to access your application. With low bandwidth connections, network latency accounts for the major part of your application's response time. The following design recommendations help address this issue:
- Minimize your page size and use of graphics. Measure page sizes and evaluate performance by using a variety of bandwidths.
- Minimize the iterations required to complete an operation.
- Minimize the use of view state. For more information, see "View State" in Chapter 6, "Improving ASP.NET Performance."
- Use client-side validation (in addition to server-side validation) to help reduce round trips.
- Retrieve only the data you need. If you need to display a large amount of information to the user, implement a data paging technique.
- Enable HTTP 1.1 compression. By default, IIS uses the GZIP and DEFLATE HTTP compression methods. Both compression methods are implemented through an ISAPI filter. For more information about enabling HTTP compression, review the IIS documentation. You can find more information about the GZIP File Format Specification (RFC 1952) and DEFLATE Compressed Data Format Specification (RFC 1951) at http://www.ietf.org/.
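The data paging technique mentioned above can be sketched with keyset-based paging, one common approach in which each query retrieves only a single page of rows. The schema and page size are hypothetical:

```csharp
using System.Data;
using System.Data.SqlClient;

public class CustomerPager
{
    // Returns the next page of customers after the last key the client saw.
    // Keyset paging sends only pageSize rows over the wire per request.
    public static DataSet GetNextPage(string connectionString, int lastCustomerId, int pageSize)
    {
        // TOP cannot be parameterized in SQL Server 2000, so the page size
        // is concatenated; it is an integer under our control, not user text.
        string query =
            "SELECT TOP " + pageSize +
            " CustomerID, CompanyName FROM Customers" +
            " WHERE CustomerID > @LastCustomerID ORDER BY CustomerID";

        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            SqlDataAdapter adapter = new SqlDataAdapter(query, conn);
            adapter.SelectCommand.Parameters.Add("@LastCustomerID", SqlDbType.Int).Value = lastCustomerId;

            DataSet page = new DataSet();
            adapter.Fill(page, "Customers"); // At most pageSize rows.
            return page;
        }
    }
}
```

The client passes the highest CustomerID from the previous page to fetch the next one, so neither the server nor the network ever handles the full result set.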
Do you share resources with other applications?
If your application is hosted by an Internet service provider (ISP) or runs in another hosted environment, it shares resources, such as processor, memory, and disk space, with other applications. You need to identify the resource utilization restrictions. For example, how much CPU is your application allowed to consume?
Knowing your resource restrictions can help you during the early design and prototyping stage of your application development.
Does your design support scaling up?
You scale up by adding resources, such as processors and RAM, to your existing servers to support increased capacity. Although scaling up is usually simpler than scaling out, it has pitfalls of its own. For example, your application can fail to take advantage of multiple CPUs.
Does your design support scaling out?
You scale out by adding more servers and by using load balancing and clustering solutions to spread the workload. This approach also provides protection against some hardware failures because, if one server goes down, another takes over. A common strategy is to start by scaling up, and then scale out if required.
To support a scale-out strategy, you need to avoid certain pitfalls in your application design. To help ensure that your application can be scaled out, review the following questions:
- Does your design use logical layers?
- Does your design consider the impact of resource affinity?
- Does your design support load balancing?
Does your design use logical layers?
You use logical layers, such as the presentation, application, and database layers, to group related and frequently interacting components. Strive for logical partitioning and a design where interfaces act as a contract between layers. This makes it easier to relocate functionality; for example, if you need to move computationally intensive business logic to another server. Failure to apply logical layering results in monolithic applications that are difficult to maintain, enhance, and scale. Maintenance and enhancement suffer because it becomes difficult to gauge the effect of a change in one component on the remaining components in your application.
**Note** Logical layering does not necessarily mean that you will have physical partitioning and multiple tiers when you deploy your application.
Does your design consider the impact of resource affinity?
Resource affinity means that your application logic is heavily dependent on a particular resource for the successful completion of an operation. The resources could range from hardware resources, such as CPU, memory, disk, or network, to other dependencies, such as database connections and Web service connections.
Does your design support load balancing?
Load balancing is an essential technique for the majority of Internet-facing Web applications. When you design an ASP.NET application, you have a number of options. You can use network load balancing to divide traffic between the multiple servers in a Web farm. You can also use load balancing at your application tier by using COM+ component load balancing or network load balancing; for example, if you use .NET remoting with the HTTP channel. Consider the following:
Have you considered the impact of server affinity on scalability goals?
Designs that most commonly cause server affinity are those that associate session state or data caches with a specific server. Affinity can improve performance in certain scale-up scenarios. However, it limits the effectiveness when scaling out. You can still scale out by using sticky sessions so that each client keeps coming to the same server it was connected to before. The limitation is that, instead of "per request," load balancing works on a "per client" basis; this is less effective in some scenarios. Avoid server affinity by designing appropriate state and caching mechanisms.
Do you use in-process session state?
In-process state is stored in the hosting Web server process. Out-of-process state moves the storage to a separate dedicated shared resource that can be shared between several processes and servers.
You cannot necessarily switch from using in-process state to out-of-process state simply by changing your Web.config file configuration. For example, your application might store objects that cannot be serialized. You need to design and plan for scaling out by considering the impact of round trips, as well as making all your types stored in the session serializable. For more information, see "State Management" later in this chapter.
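The serialization requirement above can be sketched as follows. Every type stored in out-of-process session state (the StateServer and SQLServer modes) must be serializable; the class and field names here are hypothetical:

```csharp
using System;

// Marking the class [Serializable] is required for out-of-process session
// state, and every field it contains must be serializable too.
[Serializable]
public class ShoppingCartItem
{
    public int ProductId;
    public int Quantity;

    // Data that can be rebuilt on demand need not be serialized at all;
    // excluding it reduces the per-request round trip to the state store.
    [NonSerialized]
    private object cachedDisplayText;
}
```

Once all session types look like this, switching the sessionState mode in Web.config from InProc to StateServer or SQLServer becomes a configuration change rather than a redesign, although each request still pays the cost of a round trip to the state store.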
Do you use a read-write cache?
A read-write cache is a server cache that is updated with user input. This cache should serve only as a means to serving Web pages faster, and should not be necessary to successfully process a request. For more information, see "Caching" later in this chapter.
Coupling and Cohesion
Coupling refers to the number and type of links (at design time or at run time) that exist between the parts of a system. Cohesion measures how many different components take advantage of shared processing and data. A design goal should be to ensure that your application is constructed in a modular fashion, and that it contains a set of highly cohesive components that are loosely coupled.
The coupling and cohesion issues that you need to consider are highlighted in Table 4.1.
Table 4.1: Coupling and Cohesion Issues
| Issues | Implications |
| --- | --- |
| Not using logical layers | Mixing functionally different logic (such as presentation and business) without clear, logical partitioning limits scalability options. |
| Object-based communication across boundaries | Chatty interfaces lead to multiple round trips. |
For more information about the questions and issues raised in this section, see "Coupling and Cohesion" in Chapter 3, "Design Guidelines for Application Performance."
Use the following questions to assess coupling and cohesion within your design:
- Is your design loosely coupled?
- How cohesive is your design?
- Do you use late binding?
Is your design loosely coupled?
Loose coupling helps to provide implementation independence and versioning independence. A tightly coupled system is more difficult to maintain and scale. Techniques that encourage loose coupling include the following:
- Interface-based programming. The interfaces define the methods that encapsulate business logic complexity.
- Statelessness. The data sent in a single call by the client is sufficient to complete a logical operation; as a result, there is no need to persist state across calls.
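The two techniques above can be sketched together: an interface that defines a coarse-grained, stateless contract. All names are hypothetical:

```csharp
// Interface-based programming: callers depend on this contract, not on
// the implementing class, so the implementation can version independently.
public interface IOrderService
{
    // Stateless: one call carries all the data needed to complete the
    // logical operation, so no state is held between calls and any server
    // in a farm can handle the request.
    int SubmitOrder(int customerId, int[] productIds, int[] quantities);
}

public class OrderService : IOrderService
{
    public int SubmitOrder(int customerId, int[] productIds, int[] quantities)
    {
        // Validate, price, and persist the order as a single unit of work.
        // (Implementation omitted.)
        return 0; // Hypothetical order ID.
    }
}
```

Because no conversational state survives between calls, the service places no affinity demands on load balancing.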
How cohesive is your design?
Review your design to ensure that logically related entities, such as classes and methods, are appropriately grouped together. For example, check that your classes contain a logically related set of methods, and that your assemblies contain logically related classes. Weak cohesion can lead to increased round trips because classes are not bundled logically and may end up residing in different physical tiers.
Noncohesive designs often require a mixture of local and remote calls to complete an operation. This can be avoided if the logically related methods are kept close to each other and do not require a complex sequence of interaction between various components. Consider the following guidelines for high cohesion:
- Partition your application in logical layers.
- Organize components in such a way that the classes that contribute to performing a particular logical operation are kept together in a component.
- Ensure that the public interfaces exposed by an object perform a single coherent operation on the data owned by the object.
Do you use late binding?
Review your design to ensure that, if you use late binding, you do so for the appropriate reasons and only where you really need to. For example, it might be appropriate to load an object based on configuration information; a database-agnostic data access layer might load different objects, depending on the currently configured database.
If you do use late binding, be aware of the performance implications. Late binding internally uses reflection, which should be avoided in performance-critical code. Late binding defers type identification until run time and requires extra processing. Some examples of late binding include using Activator.CreateInstance to load a library at run time, or using Type.InvokeMember to invoke a method on a class.
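The configuration-driven scenario above can be sketched as follows. The appSettings key and type names are hypothetical:

```csharp
using System;
using System.Configuration;

public class DataAccessFactory
{
    // Late-bound creation driven by configuration. The configured value is
    // an assembly-qualified type name such as
    // "MyApp.SqlDataAccess, MyApp.Data" (hypothetical).
    public static object CreateDataAccess()
    {
        string typeName = ConfigurationSettings.AppSettings["dataAccessType"];
        Type type = Type.GetType(typeName);
        if (type == null)
            throw new InvalidOperationException("Unknown data access type: " + typeName);

        // Activator.CreateInstance uses reflection internally, so confine
        // calls like this to startup or factory code, never to
        // performance-critical, per-request code paths.
        return Activator.CreateInstance(type);
    }
}
```

Caching the created instance (or the Type object) amortizes the reflection cost across the lifetime of the application.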
Communication
Increased communication across boundaries decreases performance. Common design issues with communication include choosing inappropriate transport mechanisms, protocols, or formatters; using remote calls more than necessary; and passing more data across remote boundaries than you really need to. For more information about the questions and issues raised in this section, see "Communication" in Chapter 3, "Design Guidelines for Application Performance."
The main communication issues that you need to consider are highlighted in Table 4.2.
Table 4.2: Communication Issues
| Issues | Implications |
| --- | --- |
| Chatty interfaces | Require multiple round trips to perform a single operation. |
| Sending more data than you need | Sending more data than is required increases serialization overhead and network latency. |
| Ignoring boundary costs | Boundary costs include security checks, thread switches, and serialization. |
To assess how efficient your communication is, review the following questions:
- Do you use chatty interfaces?
- Do you make remote calls?
- How do you exchange data with a remote server?
- Do you have secure communication requirements?
- Do you use message queues?
- Do you make long-running calls?
- Could you use application domains instead of processes?
Do you use chatty interfaces?
Chatty interfaces require multiple round trips to perform a single operation. They result in increased processing overhead, additional authentication and authorization, increased serialization overhead, and increased network latency. The exact cost depends on the type of boundary that the call crosses and the amount and type of data passed on the call.
To help reduce the chattiness of your interfaces, wrap chatty components with an object that implements a chunky interface. It is the wrapper that coordinates the business objects. This encapsulates all the complexity of the business logic layer and exposes a set of aggregated methods that helps reduce round trips. Apply this approach to COM interop in addition to remote method calls.
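The wrapper approach above can be sketched as follows. The fine-grained classes and the facade are all hypothetical:

```csharp
// Fine-grained components that would each cost a round trip if a remote
// client called them one by one.
public class CustomerAccount
{
    private string name, address;
    public void SetName(string value)    { name = value; }
    public void SetAddress(string value) { address = value; }
    public void Save() { /* persist the account */ }
}

public class CreditChecker
{
    public bool Approve(string applicant, decimal amount) { return amount < 10000m; }
}

// The chunky wrapper: clients make one coarse-grained call, and the
// wrapper coordinates the chatty objects locally, on the server side.
public class AccountFacade
{
    public bool OpenAccount(string name, string address, decimal openingBalance)
    {
        if (!new CreditChecker().Approve(name, openingBalance))
            return false;

        CustomerAccount account = new CustomerAccount();
        account.SetName(name);
        account.SetAddress(address);
        account.Save();
        return true;
    }
}
```

Only AccountFacade needs to be exposed remotely; the four calls it makes against the fine-grained objects stay in process.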
Do you make remote calls?
Making multiple remote calls incurs network utilization as well as increased processing overhead. Consider the following guidelines to help reduce round trips:
- For ASP.NET applications, use client-side validation to reduce round trips to the server. For security reasons, also use server-side validation.
- Implement client-side caching to reduce round trips. Because the client is caching data, the server needs to consider implementing data validation before initiating a transaction with the client. The server can then validate whether the client is working with a stale version of the data, which would be unsuitable for this type of transaction.
- Batch your work to reduce round trips. When you batch your work, you may have to deal with partial success or partial failure. If you do not design for this, you may have a batch that is too big, which can result in deadlocks, or you may have an entire batch rejected because one of the parts is out of order.
How do you exchange data with a remote server?
Sending data over the wire incurs serialization costs as well as network utilization. Inefficient serialization and sending more data than necessary are common causes of performance problems. Consider the following:
Do you use .NET remoting?
If you use .NET remoting, the BinaryFormatter reduces the size of data sent over the wire. The BinaryFormatter creates a binary format that is smaller in comparison to the SOAP format created by the SoapFormatter.
Use the [NonSerialized] attribute to mark any private or public data member that you do not want to be serialized.
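The two points above can be sketched together; the type and field names are hypothetical:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class Quote
{
    public string Symbol;
    public decimal Price;

    // Excluded from the serialized stream; the receiving side can
    // rebuild this value locally, so it never crosses the wire.
    [NonSerialized]
    public DateTime LocallyCachedAt;
}

public class QuoteSender
{
    // The BinaryFormatter produces a compact binary payload, smaller than
    // the SOAP format the SoapFormatter would generate for the same object.
    public static byte[] ToBytes(Quote quote)
    {
        BinaryFormatter formatter = new BinaryFormatter();
        MemoryStream stream = new MemoryStream();
        formatter.Serialize(stream, quote);
        return stream.ToArray();
    }
}
```

Note that [NonSerialized] applies to fields and affects formatter-based serialization; it has no effect on the XmlSerializer used by Web services, which has its own [XmlIgnore] attribute.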
For more information, see Chapter 11, "Improving Remoting Performance."
Do you use ADO.NET DataSets?
If you serialize DataSets or other ADO.NET objects, carefully evaluate whether you really need to send them across the network. Be aware that they are serialized as XML even if you use the BinaryFormatter.
Consider the following design options if you need to pass ADO.NET objects over the wire in performance-critical applications:
- Consider implementing custom serialization so that you can serialize the ADO.NET objects by using a binary format. This is particularly important when performance is critical and the size of the objects passed is large.
- Consider using the DataSetSurrogate class for the binary serialization of DataSets.
For more information, see Chapter 12, "Improving ADO.NET Performance."
Do you use Web services?
Web services use the XmlSerializer to serialize data. XML is transmitted as plain text, which is larger than a binary representation. Carefully evaluate the parameters and payload size for your Web service calls. Make sure the average size of the request and response payload, multiplied by the expected number of concurrent users, is well within your network bandwidth limitations.
Make sure you mark any public member that does not need to be serialized with the [XmlIgnore] attribute. There are other design considerations that help you to reduce the size of data transmitted over the wire:
- Prefer the data-centric, message-style design for your Web services. With this approach, the message acts as a data contract between the Web service and its clients. The message contains all of the information required to complete a logical operation.
- Use the document/literal encoding format for your Web services because the payload size is significantly reduced in comparison to the document/encoded or RPC/encoded formats.
- If you need to pass binary attachments, consider using Base64 encoding or, if you use Web Services Enhancements (WSE) at the client and server, consider using Direct Internet Message Encapsulation (DIME). The client can also have an implementation other than WSE that supports DIME format. For more information, see "Using Web Services Enhancements to Send SOAP Messages with Attachments" on MSDN at https://msdn.microsoft.com/en-us/library/ms996944.aspx.
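The [XmlIgnore] guidance above can be sketched as a message-style data contract; the type and member names are hypothetical:

```csharp
using System;
using System.Xml.Serialization;

// A data-centric message for a Web service: it carries everything needed
// to complete one logical operation. Public members are serialized by the
// XmlSerializer unless explicitly excluded.
public class OrderMessage
{
    public int OrderId;
    public DateTime OrderDate;

    // Server-side bookkeeping the client never needs; excluding it from
    // serialization shrinks every SOAP request and response.
    [XmlIgnore]
    public string InternalAuditTrail;
}
```

Unlike [NonSerialized], which applies to fields under formatter-based serialization, [XmlIgnore] works on both public fields and public properties under the XmlSerializer.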
For more information about Web services, see Chapter 10, "Improving Web Services Performance."
More Information
For more information, see the following resources:
- For more information about DataSetSurrogate, see Knowledge Base article 829740, "Improving DataSet Serialization and Remoting Performance," at https://support.microsoft.com/default.aspx?scid=kb;en-us;829740.
- For more information about measuring serialization overhead, see Chapter 15, "Measuring .NET Application Performance."
- For more information about improving serialization performance, see "How To: Improve Serialization Performance" in the "How To" section of this guide.
Do you have secure communication requirements?
If it is important to ensure the confidentiality and integrity of your data, you need to use encryption and keyed hashing techniques; both have an inevitable impact on performance. However, you can minimize the performance overhead by using the correct algorithms and key sizes. Consider the following:
Do you use the right encryption algorithm and key size?
Depending on how sensitive the data is and how much security you need, you can use techniques ranging from simple encoding solutions to strong encryption. If you use encryption, where possible (when both parties are known in advance), use symmetric encryption instead of asymmetric encryption. Asymmetric encryption provides improved security but has a much greater negative impact on performance. A common approach is to use asymmetric encryption only to exchange a secret key, and then to use symmetric encryption for the data itself.
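The hybrid pattern above can be sketched with the .NET crypto classes. This is a minimal sketch only; production code also needs integrity protection (for example, a keyed hash over the ciphertext), and the method shape is our own invention:

```csharp
using System.Security.Cryptography;

public class HybridEncryptor
{
    // Encrypts bulk data with a fast symmetric algorithm (Rijndael) and
    // protects only the small symmetric key with slow asymmetric RSA.
    public static void Encrypt(byte[] plaintext, RSACryptoServiceProvider recipientRsa,
                               out byte[] encryptedKey, out byte[] iv, out byte[] ciphertext)
    {
        RijndaelManaged symmetric = new RijndaelManaged();

        // Symmetric pass over the bulk data: cheap even for large payloads.
        // A fresh random key and IV are generated automatically.
        ICryptoTransform encryptor = symmetric.CreateEncryptor();
        ciphertext = encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);
        iv = symmetric.IV;

        // Asymmetric pass over the 32-byte key only: the expensive
        // operation touches a few bytes, not the whole message.
        encryptedKey = recipientRsa.Encrypt(symmetric.Key, false);
    }
}
```

The recipient reverses the process: decrypt the key with the RSA private key, then decrypt the bulk data symmetrically. The per-message asymmetric cost is constant regardless of payload size.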
More Information
For more information, see "Cryptography" in Chapter 7, "Building Secure Assemblies," of Improving Web Application Security: Threats and Countermeasures on MSDN at https://msdn.microsoft.com/en-us/library/ms994921.aspx.
Do you use message queues?
Using message queues allows you to queue work for a component without blocking for results. Message queues are particularly useful for decoupling the front-end and back-end components in a system and for improving system reliability. When processing is complete, the server can post results back to a client-side message queue, where each message can be identified and reconciled with a unique message ID. If necessary, you can use a dedicated background process on the client to process message responses.
To use a component-based programming model with message queuing, consider using Enterprise Services Queued Components.
Do you make long-running calls?
A long-running call can be any type of work that takes a long time to complete, where "long" is a relative term. Usually, long-running calls result from calling Web services, a remote database server, or a remote component. For a server application, long-running calls may end up blocking the worker threads, the I/O threads, or both, depending on the implementation logic.
The following designs help you avoid the impact of blocking with long-running calls:
- Use message queues. If the client requires a success indicator or results from the server process later on, use a client-side queue.
- If the client does not need any data from the server, consider the [OneWay] attribute. With this "fire and forget" model, the client issues the call and then continues without waiting for results.
- If a client makes a long-running call and cannot proceed without the results, consider asynchronous invocation. For server applications, asynchronous invocation allows the worker thread that invokes the call to continue and perform additional processing before retrieving the results. In most of the scenarios, if the results are not available at this point, the worker thread blocks until the results are returned.
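The asynchronous invocation option above can be sketched with the delegate BeginInvoke/EndInvoke pattern. GenerateReport and the two-second delay are hypothetical stand-ins for a slow remote call:

```csharp
using System;
using System.Threading;

public class ReportClient
{
    // Delegate whose signature matches the long-running call.
    private delegate string ReportDelegate(int reportId);

    private static string GenerateReport(int reportId)
    {
        Thread.Sleep(2000); // Simulates a slow remote or compute-bound call.
        return "report " + reportId;
    }

    public static void Run()
    {
        ReportDelegate call = new ReportDelegate(GenerateReport);

        // Start the call on a thread pool thread; the calling thread
        // is immediately free to do other work.
        IAsyncResult asyncResult = call.BeginInvoke(42, null, null);

        DoOtherProcessing();

        // Collect the results. This blocks only if the call has not
        // finished by the time the results are needed.
        string report = call.EndInvoke(asyncResult);
    }

    private static void DoOtherProcessing() { /* overlapped work */ }
}
```

Every call to BeginInvoke must be paired with a call to EndInvoke; otherwise resources associated with the asynchronous call are not released.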
More Information
For more information about handling long-running calls from ASP.NET, see "How To: Submit and Poll for Long-Running Tasks" in the "How To" section of this guide.
For more information about using the OneWay attribute with Web services, see "One-Way (Fire-and-Forget) Communication" in Chapter 10, "Improving Web Services Performance."
Cross-application domain communication is considerably faster than interprocess communication (IPC). Some scenarios where multiple application domains would be appropriate include the following:
- Your application spawns a copy of itself often.
- Your application spends a lot of time in IPC with local programs that work exclusively with your application.
- Your application opens and closes other programs to perform work.
While cross-application domain communication is far faster than IPC, the cost of starting and closing an application domain can actually be more expensive. There are other limitations; for example, a fatal error in one application domain could potentially bring the entire process down, and there could be resource limitations because all application domains share the same limited virtual memory space of the process.
Use the questions in this section to assess how well your design minimizes contention and maximizes concurrency.
The main concurrency issues that you need to consider are highlighted in Table 4.3.
Table 4.3: Concurrency Issues
Issues | Implications |
---|---|
Blocking calls | Stalls the application, and reduces response time and throughput. |
Nongranular locks | Stalls the application, and leads to queued requests and timeouts. |
Misusing threads | Additional processor and memory overhead due to context switching and thread management overhead. |
Holding onto locks longer than necessary | Causes increased contention and reduced concurrency. |
Inappropriate isolation levels | Poor choice of isolation levels results in contention, long wait time, timeouts, and deadlocks. |
To assess concurrency issues, review the following questions:
- Do you need to execute tasks concurrently?
- Do you create threads on a per-request basis?
- Do you design thread-safe types by default?
- Do you use fine-grained locks?
- Do you acquire late and release early?
- Do you use the appropriate synchronization primitive?
- Do you use an appropriate transaction isolation level?
- Does your design consider asynchronous execution?
Concurrent execution tends to be most suitable for tasks that are independent of each other. You do not benefit from asynchronous implementation if the work is CPU-bound (especially for single-processor servers) instead of I/O-bound. If the work is CPU-bound, an asynchronous implementation results in increased utilization and thread switching on an already busy processor. This is likely to hurt performance and throughput.
Consider using asynchronous invocation when the client can execute parallel tasks that are I/O-bound as part of the unit of work. For example, you can use an asynchronous call to a Web service to free up the executing thread to do some parallel work before blocking on the Web service call and waiting for the results.
Review your design and ensure that you use the thread pool. Using the thread pool increases the probability that the processor will find a thread in a ready-to-run state, which results in increased parallelism among the threads.
Threads are shared resources and are expensive to initialize and manage. If you create threads on a per-request basis in a server-side application, this affects scalability by increasing the likelihood of thread starvation and affects performance, due to the increased overhead of thread creation, processor context switching, and garbage collection.
Avoid making types thread safe by default. Thread safety adds an additional layer of complexity and overhead to your types, which is often unnecessary if synchronization issues are dealt with by a higher-level layer of software.
Evaluate the tradeoff between having coarse-grained and fine-grained locks. Fine-grained locks ensure atomic execution of a small amount of code. When used properly, they provide greater concurrency by reducing lock contention. When used in the wrong places, fine-grained locks may add complexity and decrease performance and concurrency.
Acquiring late and releasing shared resources early is the key to reducing contention. You lock a shared resource by locking all the code paths accessing the resource. Make sure that you minimize the duration for which you hold the locks on these code paths, because most resources tend to be shared and limited. The faster you release the lock, the earlier the resource becomes available to other threads.
The correct approach is to determine the optimum granularity of locking for your scenario:
- Method level synchronization. It is appropriate to synchronize at the method level when all that the method does is act on the resource that needs synchronized access.
- Synchronizing access to relevant piece of code. If a method needs to validate parameters and perform other operations beyond accessing a resource that requires serialized access, you should consider locking only the relevant lines of code that access the resource. This helps to reduce contention and improve concurrency.
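The second approach can be sketched as follows. The Counter class is hypothetical; the point is that parameter validation runs outside the lock, and only the line that touches shared state is synchronized:

```csharp
using System;

public class Counter
{
    private readonly object _syncRoot = new object();
    private int _total;

    public void Add(int amount)
    {
        // Parameter validation needs no synchronization.
        if (amount < 0)
            throw new ArgumentOutOfRangeException("amount");

        // Lock only the code that touches the shared field.
        lock (_syncRoot)
        {
            _total += amount;
        }
    }
}
```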
Using the appropriate synchronization primitive helps reduce contention for resources. There may be scenarios where you need to signal other waiting threads either manually or automatically, based on the trigger of an event. Other scenarios vary by the frequency of read and write updates. Some of the guidelines that help you choose the appropriate synchronization primitive for your scenario are the following:
- Use Mutex for interprocess communication.
- Use AutoResetEvent and ManualResetEvent for event signaling.
- Use System.Threading.Interlocked for synchronized increments and decrements on integers and longs.
- Use ReaderWriterLock for multiple concurrent reads. When the write operation takes place, it is exclusive because all other read and write threads are queued up.
- Use lock when you want to allow only one reader or writer to act on the object at a time.
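As a minimal sketch of the ReaderWriterLock guideline (the SettingsCache class is hypothetical), readers proceed concurrently while a writer takes an exclusive lock:

```csharp
using System.Collections;
using System.Threading;

public class SettingsCache
{
    private static readonly ReaderWriterLock _rwLock = new ReaderWriterLock();
    private static readonly Hashtable _settings = new Hashtable();

    public static object Get(string key)
    {
        _rwLock.AcquireReaderLock(Timeout.Infinite);   // many readers at once
        try
        {
            return _settings[key];
        }
        finally
        {
            _rwLock.ReleaseReaderLock();
        }
    }

    public static void Set(string key, object value)
    {
        _rwLock.AcquireWriterLock(Timeout.Infinite);   // exclusive access
        try
        {
            _settings[key] = value;
        }
        finally
        {
            _rwLock.ReleaseWriterLock();
        }
    }
}
```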
When considering units of work (the size of transactions), you need to think about what your isolation level should be, what locking is required to provide that isolation level, and, therefore, what your risk of deadlocks and deadlock-based retries is. You need to select appropriate isolation levels for your transactions to ensure that data integrity is preserved without unduly affecting application performance.
Selecting an isolation level higher than you need means that you lock objects in the database for longer periods of time and increase contention for those objects. Selecting an isolation level that is too low increases the probability of losing data integrity by causing dirty reads or writes.
If you are unsure of the appropriate isolation level for your database, you should use the default implementation, which is designed to work well in most scenarios.
**Note** You can selectively lower the isolation level used in specific queries, rather than changing it for the entire database.
For more information, see Chapter 14, "Improving SQL Server Performance."
Asynchronous execution of work allows the main processing thread to offload the work to other threads, so it can continue to do some additional processing before retrieving the results of the asynchronous call, if they are required.
Scenarios that require I/O-bound work, such as file operations and calls to Web services, are potentially long-running and may block on the I/O or worker threads, depending on the implementation logic used for completing the operation. When considering asynchronous execution, evaluate the following questions:
Are you designing a Windows Forms application?
Windows Forms applications executing an I/O call, such as a call to a Web service or a file I/O operation, should generally use asynchronous execution to keep the user interface responsive. The .NET Framework provides support for asynchronous operations in all the classes related to I/O activities, except in ADO.NET.
Are you designing a server application?
Server applications should use asynchronous execution whenever the work is I/O-bound, such as calling Web services, if the application is able to perform some useful work when the executing worker thread is freed.
You can free up the worker thread completely by submitting work and polling for results from the client at regular intervals. For more information about how to do this, see "How To: Submit and Poll for Long-Running Tasks" in the "How To" section of this guide.
Other approaches include freeing up the worker thread partially to do some useful work before blocking for the results. These approaches use WaitHandle-derived synchronization objects, such as Mutex.
For server applications, you should not call the database asynchronously, because ADO.NET does not have support for such operations and it requires the use of delegates that run on worker threads for processing. You might as well block on the original thread rather than using another worker thread to complete the operation.
Do you use the asynchronous design pattern?
The .NET Framework provides a design pattern for asynchronous communication. The advantage is that it is the caller that decides whether a particular call should be asynchronous. It is not necessary for the callee to expose plumbing for asynchronous invocation. Other advantages include type safety.
For more information, see "Asynchronous Design Pattern Overview" in the .NET Framework Developer's Guide on MSDN at: https://msdn.microsoft.com/en-us/library/aa719595(VS.71).aspx.
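As a sketch of the caller-driven pattern, proxies generated by the .NET Framework expose Begin/End method pairs alongside the synchronous method. The StockService proxy and GetQuote method here are hypothetical:

```csharp
using System;

// StockService is a hypothetical generated proxy that exposes
// BeginGetQuote/EndGetQuote alongside the synchronous GetQuote.
StockService proxy = new StockService();
IAsyncResult ar = proxy.BeginGetQuote("MSFT", null, null);

// Perform other useful work in parallel here...

// Blocks only if the call has not yet completed.
decimal quote = proxy.EndGetQuote(ar);
```

Note that it is the caller that chooses the asynchronous form; the service itself needs no special asynchronous plumbing.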
More Information
For more information about the questions and issues raised in this section, see "Concurrency" in Chapter 3, "Design Guidelines for Application Performance."
Common resource management issues include failing to release and pool resources in a timely manner and failing to use caching, which leads to excessive resource access. For more information about the questions and issues raised in this section, see "Resource Management" in Chapter 3, "Design Guidelines for Application Performance."
The main resource management issues that you need to consider are highlighted in Table 4.4.
Table 4.4: Resource Management Issues
Issues | Implications |
---|---|
Not pooling costly resources | Can result in creating many instances of the resource, along with the associated connection overhead. The increased overhead cost affects the response time of the application. |
Holding onto shared resources | Not releasing (or delaying the release of) shared resources, such as connections, leads to resource drain on the server and limits scalability. |
Accessing or updating large amounts of data | Retrieving large amounts of data from the resource increases the time taken to service the request, as well as network latency. This should be avoided, especially on low bandwidth access, because it affects response time. Increase in time spent on the server also affects response time as concurrent users increase. |
Not cleaning up properly | Leads to resource shortages and increased memory consumption; both of these affect scalability. |
Failing to consider how to throttle resources | Large numbers of clients can cause resource starvation and overload the server. |
To assess the efficiency of your application's resource management, review the following questions:
- Does your design accommodate pooling?
- Do you acquire late and release early?
Identify resources that incur lengthy initialization and make sure that you use pooling, where possible, to efficiently share them among multiple clients. Resources suitable for pooling include threads, network connections, I/O buffers, and objects.
As a general guideline, create and initialize pools at application startup. Make sure that your client code releases the pooled object as soon as it finishes with the resource. Consider the following:
Do you use Enterprise Services?
Consider object pooling for custom objects that are expensive to create. Object pooling lets you configure and optimize the maximum and minimum size of the object pool. For more information, see "Object Pooling" in Chapter 8, "Improving Enterprise Services Performance."
Do you treat threads as shared resources?
Use the .NET thread pool, where possible, instead of creating threads on a per-request basis. By default, the thread pool is self-tuning and you should change its defaults only if you have specific requirements. For more information about when and how to configure the thread pool, see "Threading Explained" in Chapter 6, "Improving ASP.NET Performance" and "Threading" in Chapter 10, "Improving Web Services Performance."
Do you use database connection pooling?
You should connect to the database by using a single trusted identity. Avoid impersonating the original caller and using that identity to access the database. Using a trusted subsystem approach instead of impersonation enables you to use connection pooling efficiently.
More Information
For more information about connection pooling, see "Connections" in Chapter 12, "Improving ADO.NET Performance."
Minimize the duration that you hold onto a resource. When you work with a shared resource, the faster you release it, the faster it becomes available for other users. For example, you should acquire a lock on a resource just before you need to perform the actual operation, rather than holding onto it in the pre-processing stage. This helps reduce contention for the shared resources.
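A minimal data-access sketch of this guideline: open the connection as late as possible and dispose of it immediately, which returns it to the pool. The connection string and query are illustrative:

```csharp
using System.Data;
using System.Data.SqlClient;

public DataSet GetOrders()
{
    DataSet results = new DataSet();

    // Acquire late: the connection is created just before it is needed...
    using (SqlConnection conn = new SqlConnection(
        "server=(local);database=Northwind;Integrated Security=SSPI"))
    {
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT OrderID, OrderDate FROM Orders", conn);
        adapter.Fill(results);   // Fill opens and closes the connection itself
    }
    // ...and released early: leaving the using block returns it to the pool.

    return results;
}
```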
Assess your application's approach to caching and identify where, when, and how your application caches data. Review your design to see if you have missed opportunities for caching. Caching is one of the best known techniques for improving performance.
The main caching issues that you need to consider are highlighted in Table 4.5.
Table 4.5: Caching Issues
Issues | Implications |
---|---|
Not using caching when you can | Round trips to data store for every single user request, increased load on the data store. |
Updating your cache more frequently than you need to | Increased client response time, reduced throughput, and increased server resource utilization. |
Caching the inappropriate form of data | Increased memory consumption, resulting in reduced performance, cache misses, and increased data store access. |
Caching volatile or user-specific data | Frequently changing data requires frequent expiration of cache, resulting in excess usage of CPU, memory, and network resources. |
Holding cache data for prolonged periods | With inappropriate expiration policies or scavenging mechanisms, your application serves stale data. |
Not having a cache synchronization mechanism in a Web farm | The caches on the servers in the farm are not synchronized, which can lead to improper functional behavior of the application. |
To assess how effectively your application uses caching, review the following questions:
- Do you cache data?
- Do you know which data to cache?
- Do you cache volatile data?
- Have you chosen the right cache location?
- What is your expiration policy?
Do you make expensive lookups on a per-request basis? If you operate on data that is expensive to retrieve, compute, or render, it is probably a good candidate for caching. Identify areas in your application that might benefit from caching.
Identify opportunities for caching early during your application's design. Avoid considering caching only in the later stages of the development cycle as an emergency measure to increase performance.
Prepare a list of data suitable for caching throughout the various layers of your application. If you do not identify candidate data for caching up front, you can easily generate excessive redundant traffic and perform more work than is necessary.
Potential candidates for caching include the following:
- Relatively static Web pages. You can cache pages that do not change frequently by using the output cache feature of ASP.NET. Consider using user controls to contain the static portions of a page. This enables you to benefit from ASP.NET fragment caching.
- Specific items of output data. You can cache data that needs to be displayed to users by using the ASP.NET Cache class.
- Stored procedure parameters and query results. You can cache frequently used query parameters and query results. This is usually done in the data access layer to reduce the number of round trips to the database. Caching partial results helps dynamic pages generate a wide set of output (such as menus and controls) from a small set of cached results.
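As a sketch of the second candidate, the ASP.NET Cache class can hold display data with an absolute expiration; this assumes code running inside a page, and LoadProductsFromDatabase is a hypothetical helper:

```csharp
// Check the cache first; repopulate with a five-minute absolute expiration.
DataSet products = (DataSet)Cache["Products"];
if (products == null)
{
    products = LoadProductsFromDatabase();   // hypothetical data access call
    Cache.Insert("Products", products, null,
                 DateTime.Now.AddMinutes(5),
                 System.Web.Caching.Cache.NoSlidingExpiration);
}
```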
Do you know the frequency at which data is modified? Use this information to decide whether to cache the data. You should also be aware of how out-of-date the data you display can be, with respect to the source data. You should be aware of the permissible time limit for which the stale data can be displayed, even when the data has been updated in its source location.
Ideally, you should cache data that is relatively static over a period of time, and data that does not need to change for each user. However, even if your data is quite volatile and changes, for example, every two minutes, you can still benefit from caching. For example, if you usually expect to receive requests from 20 clients in a 2-minute interval, you can save 20 round trips to the server by caching the data.
To determine whether caching particular sets of data is beneficial, you should measure performance both with and without caching.
Make sure you cache data at a location where it saves the most processing and round trips. It also needs to be a location that supports the lifetime you require for the cached items. You can cache data at various layers in your application. Review the following layer-by-layer considerations:
Do you cache data in your presentation layer?
You should cache data in the presentation layer that needs to be displayed to the user. For example, you can cache the information that is displayed in a stock ticker. You should generally avoid caching per-user data, unless the user base is very small and the total size of the data cache does not require too much memory. However, if users tend to be active for a while and then go away again, caching per-user data for short time periods may be the appropriate approach. This depends on your caching policy.
Do you cache data in your business layer?
Cache data in the business layer if you need it to process requests from the presentation layer. For example, you can cache the input parameters to a stored procedure in a collection.
Do you cache data in your database?
You can consider caching data in temporary tables in a database if you need it for lengthy periods. It is useful to cache data in a database when it takes a long time to process the queries to get a result set. The result set may be very large in size, so it would be prohibitive to send the data over the wire to be stored in other layers. For a large amount of data, implement a paging mechanism that enables the user to retrieve the cached data a chunk at a time. You also need to consider the expiration policy for data when the source data is updated.
Do you know the format in which the cached data will be used?
Prefer caching data in its most ready state so that it does not need any additional processing or transformations. For example, you can cache a whole Web page by using output caching. This significantly reduces the ASP.NET processing overhead on your Web server.
Do you write to the cache?
If you write user updates to a cache before updating them in a persistent database, this creates server affinity. This is problematic if your application is deployed in a Web farm, because the request from a particular client is tied to a particular server, due to localized cache updates.
To avoid this situation, you should update cached data only to further improve performance and not if it is required for successful request processing. In this way, requests can still be successfully served by other servers in the same cluster in a Web farm.
Consider using a session state store for user-specific data updates.
An inappropriate expiration policy may result in frequent invalidation of the cached data, which negates the benefits of caching. Consider the following while choosing an expiration mechanism:
How often is the cached information allowed to be wrong?
Keep in mind that every piece of cached data is already potentially stale. Knowing the answer to this question helps you evaluate the most appropriate absolute or sliding expiration algorithms.
Is there any dependency whose change invalidates the cached data?
You need to evaluate dependency-based algorithms. For example, the ASP.NET Cache class allows data expiration if changes are made to a particular file. Note that, in some scenarios, it might be acceptable to display data that is a little old.
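A minimal sketch of file-dependency expiration with the ASP.NET Cache class; the prices.xml file and priceData variable are illustrative, and the code is assumed to run inside a page:

```csharp
using System.Web.Caching;

// The cached item is removed automatically when prices.xml changes on disk.
CacheDependency dependency =
    new CacheDependency(Server.MapPath("prices.xml"));
Cache.Insert("Prices", priceData, dependency);
```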
Is the lifetime dependent upon how frequently the data is used?
If the answer is yes, you need to evaluate the least recently used or least frequently used algorithms.
Do you repopulate caches for frequently changing data?
If your data changes frequently, it may or may not be a good candidate for caching. Evaluate the performance benefits of caching against the cost of building the cache. Caching frequently changing data can be an excellent idea if slightly stale data is good enough.
Have you implemented a caching solution that takes time to load?
If you need to maintain a large cache and the cache takes a long time to build, consider using a background thread to build the cache or build up the cache incrementally over time. When the current cache expires, you can then swap out the current cache with the updated cache you built in the background. Otherwise, you may block client requests while they wait for the cache to update.
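The swap technique can be sketched as follows (the CatalogCache class and LoadCatalog method are hypothetical). Because assigning a reference is atomic, readers see either the old cache or the new one, never a partially built cache:

```csharp
using System.Collections;
using System.Threading;

public class CatalogCache
{
    private static Hashtable _current = new Hashtable();

    public static Hashtable Current
    {
        get { return _current; }
    }

    // Rebuild on a background thread; readers keep using the old
    // cache until the fresh copy is complete.
    public static void BeginRefresh()
    {
        Thread worker = new Thread(new ThreadStart(Rebuild));
        worker.IsBackground = true;
        worker.Start();
    }

    private static void Rebuild()
    {
        Hashtable fresh = LoadCatalog();
        _current = fresh;               // atomic reference swap
    }

    private static Hashtable LoadCatalog()
    {
        // Placeholder for a slow, expensive load operation.
        return new Hashtable();
    }
}
```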
More Information
For more information, see the following resources:
- For more information about the questions raised in this section, see "Caching" in Chapter 3, "Design Guidelines for Application Performance."
- For more information and guidelines about caching, see the Caching Architecture Guide for .NET Framework Applications on MSDN at https://msdn.microsoft.com/en-us/library/ms978498.aspx.
- For middle-tier caching solutions, consider the Caching Application Block included with Enterprise Library on MSDN.
The main state management issues that you need to consider are highlighted in Table 4.6.
Table 4.6: State Management Issues
Issues | Implications |
---|---|
Stateful components | Holds server resources and can cause server affinity, which reduces scalability options. |
Use of an in-memory state store | Limits scalability due to server affinity. |
Storing state in the database or server when the client is a better choice | Increased server resource utilization; limited server scalability. |
Storing state on the server when a database is a better choice | In-process and local state stored on the Web server limits the ability of the Web application to run in a Web farm. Large amounts of state maintained in memory also create memory pressure on the server. |
Storing more state than you need | Increased server resource utilization, and increased time for state storage and retrieval. |
Prolonged sessions | Inappropriate timeout values result in sessions consuming and holding server resources for longer than necessary. |
For more information about the questions raised in this section, see "State Management" in Chapter 3, "Design Guidelines for Application Performance."
To assess the state management efficiency, review the following questions:
- Do you use stateless components?
- Do you use .NET remoting?
- Do you use Web services?
- Do you use Enterprise Services?
- Have you ensured that objects to be stored in session stores are serializable?
- Do you depend on view state?
- Do you know the number of concurrent sessions and average session data per user?
Carefully consider whether you need stateful components. A stateless design is generally preferred because it offers greater options for scalability. Some of the key considerations are the following:
What are the scalability requirements for your application?
If you need to be able to locate your business components on a remote clustered middle tier, you may either need to plan for stateless components, or store state on a different server that is accessible by all of the servers in your middle-tier cluster.
If you do not have such scalability requirements, stateful components in certain scenarios help improve performance, because the state need not be transmitted by the client over the wire or retrieved from a remote database.
How do you manage state in stateless components?
If you design for stateless components and need to abstract state management, you need to know the lifetime requirements and size of the state data. If you opt for stateless components, some options for state management are the following:
- Passing state from the client on each component call. This method is efficient if multiple calls are not required to complete a single logical operation and if the amount of data is relatively small. This is ideal if the state is mostly needed to process requests and can be disposed of once the processing is complete.
- Storing state in a database. This approach is appropriate if the operation spans multiple calls such that transmitting state from the client would be inefficient, the state is to be accessed by multiple clients, or both.
.NET remoting supports server-activated objects (SAOs) and client-activated objects (CAOs). If you have specific scalability requirements and need to plan for a load-balanced environment, you should prefer single call SAOs. These objects retain state only for the duration of a single request.
Singleton SAOs may be stateful or stateless and can rehydrate state from various mediums, depending on requirements. Use these when you need to provide synchronized access to a particular resource.
If your scalability objectives enable you to use a single server, you can evaluate the use of CAOs, which are stateful objects. A client-activated object can be accessed only by the particular instance of the client that created it. Hence, they are capable of storing state across calls.
For more information, see "Design Considerations" in Chapter 11, "Improving Remoting Performance."
Stateful Web services signify an RPC or distributed-object design. With RPC-style design, a single logical operation can span multiple calls. This type of design often increases round trips and usually requires that state be persisted across multiple calls.
A message-based approach is usually preferable for Web services. With this approach, the payload serves as a data contract between the client and the server. The client passes the payload to the server. This contains sufficient information to complete a single unit of work. This generally does not require any state to be persisted across calls, and, as a result, this design can be easily scaled out across multiple servers.
For more information, see "State Management" in Chapter 10, "Improving Web Services Performance."
If you plan to use Enterprise Services object pooling, you need to design stateless components. This is because the objects need to be recycled across various requests from different clients. Storing state for a particular client means the object cannot be shared across clients.
For more information, see "State Management" in Chapter 8, "Improving Enterprise Services Performance."
To store objects in an out-of-process session state store, such as a state service or SQL Server, the objects must be serializable. You do not need serializable objects to store objects in the in-process state store, but you should bear this in mind in case you need to move your session state out-of-process.
You enable a class to be serialized by using the Serializable attribute. Make sure that you use the NonSerialized attribute to avoid any unnecessary serialization.
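A minimal sketch (the ShoppingCart class is hypothetical); the cached total can be recomputed from the items after deserialization, so it is excluded:

```csharp
using System;
using System.Collections;

[Serializable]
public class ShoppingCart
{
    private ArrayList _items = new ArrayList();

    // Recomputable from _items, so there is no need to serialize it.
    [NonSerialized]
    private decimal _cachedTotal;
}
```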
If you use or plan to use view state to maintain state across calls, you should prototype and carefully evaluate the performance impact. Consider the total page size and the bandwidth requirements to satisfy your response time goals.
Persisting large amounts of data from server controls, such as the DataGrid, significantly increases page size and delays response times. Use tracing in ASP.NET to find out the exact view state size for each server control or for a whole page.
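As a sketch, page-level tracing is enabled with the Trace attribute of the @ Page directive; the resulting trace output lists the view state size for each control in the control tree:

```aspx
<%@ Page Language="C#" Trace="true" %>
```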
More Information
For more information about improving view state efficiency, see "View State" in Chapter 6, "Improving ASP.NET Performance."
Knowing the number of concurrent sessions and the average session data per user enables you to decide the session store. If the total amount of session data accounts for a significant portion of the memory allocated for the ASP.NET worker process, you should consider an out-of-process store.
Using an out-of-process state store increases network round trips and serialization costs, so this needs to be evaluated. Storing many custom objects in session state or storing a lot of small values increases overhead. Consider combining the values in a type before adding them to the session store.
The main data structure issues that you need to consider are highlighted in Table 4.7.
Table 4.7: Data Structure Issues
Issues | Implications |
---|---|
Choosing a collection without evaluating your needs (size, adding, deleting, updating) | Reduced efficiency; overly complex code. |
Using the wrong collection for a given task | Reduced efficiency; overly complex code. |
Excessive type conversion | Passing value types where reference types are expected causes boxing and unboxing overhead, which hurts performance. |
Inefficient lookups | Complete scan of all the content in the data structure, resulting in slow performance. |
Not measuring the cost of your data structures or algorithms in your actual scenarios | Undetected bottlenecks due to inefficient code. |
For more information, see "Data Structures and Algorithms" in Chapter 3, "Design Guidelines for Application Performance."
Consider the following questions to assess your data structure and algorithm design:
- Do you use appropriate data structures?
- Do you need custom collections?
- Do you need to implement IEnumerable for your custom collections?
Choosing the wrong data structure for your task can hurt performance because specific data structures are designed and optimized for particular tasks. For example, if you need to store and pass value types across a physical boundary, rather than using a collection, you can use a simple array, which avoids the boxing overhead.
Clearly define your requirements for a data structure before choosing one. For example, do you need to sort data, search for data, or access elements by index?
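The boxing point can be sketched as follows: storing value types in an ArrayList boxes each element, while a strongly typed array stores the values directly:

```csharp
using System.Collections;

// Boxing: each int is wrapped in a heap object as it enters the list.
ArrayList boxed = new ArrayList();
for (int i = 0; i < 1000; i++)
{
    boxed.Add(i);   // implicit box of the value type
}

// No boxing: the array holds the int values directly.
int[] unboxed = new int[1000];
for (int i = 0; i < 1000; i++)
{
    unboxed[i] = i;
}
```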
More Information
For more information about choosing the appropriate data structure, see "Collection Guidelines" in Chapter 5, "Improving Managed Code Performance" and "Selecting a Collection Class" in the .NET Framework Developer's Guide on MSDN at https://msdn.microsoft.com/en-us/library/6tc79sx1.aspx.
For most scenarios, the collections provided by .NET Framework are sufficient, although on occasion, you might need to develop a custom collection. Carefully investigate the supplied collection classes before developing your own. The main reasons for wanting to develop your own custom collection include the following:
- You need to marshal a collection by reference rather than by value, which is the default behavior of collections provided by .NET Framework.
- You need a strongly typed collection.
- You need to customize the serialization behavior of a collection.
- You need to optimize on the cost of enumeration.
If you are developing a custom collection and need to frequently enumerate through the collection, you should implement the IEnumerable interface to minimize the cost of enumeration.
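A minimal sketch of a custom collection that supports foreach by implementing IEnumerable; the OrderCollection class is hypothetical and simply delegates to an inner ArrayList:

```csharp
using System.Collections;

public class OrderCollection : IEnumerable
{
    private readonly ArrayList _orders = new ArrayList();

    public void Add(object order)
    {
        _orders.Add(order);
    }

    // Enables foreach over the collection.
    public IEnumerator GetEnumerator()
    {
        return _orders.GetEnumerator();
    }
}
```

A performance-sensitive collection would typically supply its own enumerator implementation rather than delegating, but the interface shape is the same.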
For more information, see "Collections Explained" in Chapter 5, "Improving Managed Code Performance."
The main data access issues that you need to consider are highlighted in Table 4.8.
Table 4.8: Data Access Issues
Issues | Implications |
---|---|
Poor schema design | Increased database server processing; reduced throughput. |
Failure to page large result sets | Increased network bandwidth consumption; delayed response times; increased client and server load. |
Exposing inefficient object hierarchies when simpler structures would do | Increased garbage collection overhead; increased processing effort required. |
Inefficient queries or fetching all the data | Wastes server resources and degrades performance; fetching all the data to display only a portion imposes an unnecessary cost. |
Poor indexes or stale index statistics | Creates unnecessary load on the database server. |
Failure to evaluate the processing cost on your database server and your application | Failure to meet performance objectives and exceeding budget allocations. |
For more information about the questions and issues raised by this section, see Chapter 12, "Improving ADO.NET Performance." Consider the following:
- How do you pass data between layers?
- Do you use stored procedures?
- Do you process only the required data?
- Do you need to page through data?
- Do your transactions span multiple data stores?
- Do you manipulate BLOBs?
- Are you consolidating repeated data access code?
Review your approach for passing data between the layers of your application. In addition to raw performance, the main considerations are usability, maintainability, and programmability. Consider the following:
Have you considered client requirements?
Focus on the client requirements and avoid transmitting data in one form and forcing the client to convert it to another. If the client requires the data only for display purposes, simple collections, such as arrays or an ArrayList object, are suitable because they support data binding.
Do you transform the data?
If you need to transform data, avoid multiple transformations as the data flows through your application.
Can you logically group data?
For logical groupings, such as the attributes that describe an employee, consider using a custom class or struct type, which are efficient to serialize. Use the NonSerialized attribute on any field you do not need to serialize.
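A minimal sketch of that technique follows; the Employee class and its fields are hypothetical. The class is marked Serializable so it can be passed by value across a boundary, and the NonSerialized attribute excludes a derived field from the payload, which shrinks the serialized data. BinaryFormatter is used here only to demonstrate the round trip.

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

// Hypothetical logical grouping of employee attributes.
[Serializable]
public class Employee
{
    public int Id;
    public string Name;

    [NonSerialized]
    public string CachedDisplayName;  // derived data; excluded from the payload
}

class Program
{
    static void Main()
    {
        Employee e = new Employee();
        e.Id = 7;
        e.Name = "Ann";
        e.CachedDisplayName = "Ann (Sales)";

        BinaryFormatter formatter = new BinaryFormatter();
        MemoryStream stream = new MemoryStream();
        formatter.Serialize(stream, e);
        stream.Position = 0;
        Employee copy = (Employee)formatter.Deserialize(stream);

        Console.WriteLine(copy.Name);                      // Ann
        Console.WriteLine(copy.CachedDisplayName == null); // True -- skipped
    }
}
```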
Is cross-platform interoperability a design goal?
If so, you should use XML, although you need to consider performance issues including memory requirements and the significant parsing effort required to process large XML strings.
Do you use DataSet objects?
If your client needs to be able to view the data in multiple ways, update data on the server using optimistic concurrency, and handle complex relationships between various sets of data, a DataSet is well suited to these requirements. However, DataSets are expensive to create and serialize, and they have large memory footprints. If you do need a disconnected cache and the rich functionality supported by the DataSet object, have you considered a strongly typed DataSet, which offers marginally quicker field access?
Using stored procedures is preferable in most scenarios. They generally provide improved performance in comparison to dynamic SQL statements. From a security standpoint, you need to consider the potential for SQL injection and authorization. Both approaches, if poorly written, are susceptible to SQL injection. Database authorization is often easier to manage with stored procedures because you can restrict your application's service accounts to executing specific stored procedures and prevent them from accessing tables directly.
If you use stored procedures, consider the following:
- Try to avoid recompiles. For more information about how recompiles are caused, see Microsoft Knowledge Base article 243586, "INF: Troubleshooting Stored Procedure Recompilation," at https://support.microsoft.com/default.aspx?scid=kb;en-us;243586.
- Use the Parameters collection; otherwise you are still susceptible to SQL injection.
- Avoid building dynamic SQL within the stored procedure.
- Avoid mixing business logic in your stored procedures.
If you use dynamic SQL, consider the following:
- Use the Parameters collection to help prevent SQL injection.
- Batch statements if possible.
- Consider maintainability (for example, updating resource files versus statements in code).
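The guidance above about the Parameters collection can be sketched as follows. The connection string, stored procedure name, and parameter are assumptions for illustration only; the point is that parameters added through the Parameters collection are type-checked and passed as data, so user input cannot change the shape of the command.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        // Connection string and procedure name are hypothetical.
        using (SqlConnection conn = new SqlConnection(
            "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI"))
        {
            SqlCommand cmd = new SqlCommand("GetProductById", conn);
            cmd.CommandType = CommandType.StoredProcedure;

            // The Parameters collection passes the value as typed data,
            // never as part of the SQL text.
            cmd.Parameters.Add("@ProductId", SqlDbType.Int).Value = 42;

            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader["ProductName"]);
            }
        }
    }
}
```

The same Parameters collection technique applies to dynamic SQL: replace the procedure name with parameterized SQL text and leave CommandType at its default of Text.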
When using stored procedures, consider the following guidelines to maximize their performance:
- Analyze your schema to see if it is well suited to perform the updates needed or the searches. Does your schema support your unit of work? Do you have the appropriate indexes? Do your queries take advantage of your schema design?
- Look at your execution plans and costs. Logical I/O is often an excellent indicator of the overall query cost on a loaded server.
- Where possible, use output parameters instead of returning a result set that contains single rows. This avoids the performance overhead associated with creating the result set on the server.
- Evaluate your stored procedure to ensure that there are no frequent recompilations for multiple code paths. Instead of having multiple if else statements for your stored procedure, consider splitting it into multiple small stored procedures and calling them from a single stored procedure.
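As a sketch of the output-parameter guideline above, the following call retrieves a single value without the server building a result set. The procedure name GetProductName and its OUTPUT parameter are assumptions for illustration; the procedure is presumed to set @Name rather than SELECTing a one-row result.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        // Connection string and procedure details are hypothetical.
        using (SqlConnection conn = new SqlConnection(
            "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI"))
        {
            SqlCommand cmd = new SqlCommand("GetProductName", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@ProductId", SqlDbType.Int).Value = 42;

            SqlParameter name = cmd.Parameters.Add("@Name", SqlDbType.NVarChar, 40);
            name.Direction = ParameterDirection.Output;

            conn.Open();
            cmd.ExecuteNonQuery();   // no result set is created on the server
            Console.WriteLine(name.Value);
        }
    }
}
```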
Review your design to ensure you do not retrieve more data (columns or rows) than is required. Identify opportunities for paging records to reduce network traffic and server loading. When you update records, make sure you update only the changes instead of the entire set of data.
Paging through data involves transmitting data from the database to the presentation layer and displaying it to the user. Paging through a large number of records can be costly if you send more data than required over the wire, adding network, memory, and processing costs on the presentation and database tiers. Consider the following guidelines to develop a solution for paging through records:
- If the data is not very large and needs to be served to multiple clients, consider sending the data in a single iteration and caching it on the client side. You can page through the data without making round trips to the server. Make sure you use an appropriate data expiration policy.
- If the data to be served is based on user input and can potentially be large, consider sending only the most relevant rows to the client for each page size. Use the SELECT TOP statement and the TABLE data type in your SQL queries to develop this type of solution.
- If the data to be served consists of a large result set and is the same for all users, consider using global temporary tables to create and cache the data once, and then send the relevant rows to each client as they need it. This approach is most useful if you need to execute long-running queries spanning multiple tables to build the result set. If you need to fetch data only from a single table, the advantages of a temporary table are minimized.
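The SELECT TOP approach above can be sketched with a query of this shape. The Orders table, key column, and page boundaries are assumptions for illustration: the inner SELECT TOP skips the rows for the earlier pages, and the outer SELECT TOP limits the rows that travel over the wire to a single page.

```sql
-- Hypothetical example: return page 3 (rows 21-30) of Orders,
-- with a page size of 10, keyed on OrderID.
SELECT TOP 10 OrderID, CustomerID, OrderDate
FROM Orders
WHERE OrderID NOT IN
    (SELECT TOP 20 OrderID FROM Orders ORDER BY OrderID)  -- skip pages 1-2
ORDER BY OrderID
```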
More Information
For more information, see "How To: Page Records in .NET Applications" in the "How To" section of this guide.
If you have transactions spanning multiple data stores, you should consider using the distributed transactions provided by Enterprise Services. Enterprise Services uses the Microsoft Distributed Transaction Coordinator (DTC) to enforce transactions.
The DTC performs the inter-data source communication, and ensures that either all of the data is committed or none of the data is committed. This comes at an operational cost. If you do not have transactions that span multiple data sources, Transact-SQL (T-SQL) or ADO.NET manual transactions offer better performance. However, you need to trade the performance benefits against ease of programming. Declarative Enterprise Services transactions offer a simple component-based programming model.
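For the single-data-source case, an ADO.NET manual transaction can be sketched as follows. The connection string, table, and statements are assumptions for illustration; the pattern is begin, execute, then commit on success or roll back on failure, all against one connection with no DTC involvement.

```csharp
using System;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        // Connection string and table are hypothetical.
        using (SqlConnection conn = new SqlConnection(
            "Data Source=.;Initial Catalog=Bank;Integrated Security=SSPI"))
        {
            conn.Open();
            SqlTransaction tx = conn.BeginTransaction();
            try
            {
                SqlCommand debit = new SqlCommand(
                    "UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1",
                    conn, tx);
                SqlCommand credit = new SqlCommand(
                    "UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2",
                    conn, tx);
                debit.ExecuteNonQuery();
                credit.ExecuteNonQuery();
                tx.Commit();   // both updates succeed or neither does
            }
            catch
            {
                tx.Rollback();
                throw;
            }
        }
    }
}
```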
If you need to read or write BLOB data such as images, you should first consider storing the files directly on the file system and storing only the physical path or URL in the database. This reduces load on the database. If you do read or write BLOBs, one of the most inefficient approaches is to perform the operation in a single call. This results in the whole of the BLOB being transferred over the wire and held in memory, which can cause network congestion and memory pressure, particularly under a considerable concurrent user load.
If you do need to store BLOB data in the database, consider the following options to reduce the performance cost:
- Use chunking to reduce the amount of data transferred over the wire. Chunking involves more round trips, but it places comparatively less load on the server and consumes less network bandwidth. You can use DataReader.GetBytes to read the data in chunks or use SQL Server-specific commands, such as READTEXT and UPDATETEXT, to perform such chunking operations.
- Avoid moving the BLOB repeatedly because the cost of moving them around can be significant in terms of server and network resources. Consider caching the BLOB on the client side after a read operation.
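A minimal sketch of the chunked-read technique follows; the connection string, table, and column names are assumptions for illustration. CommandBehavior.SequentialAccess streams the column rather than buffering the entire BLOB on the client, and GetBytes reads one fixed-size chunk per call.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using System.IO;

class Program
{
    static void Main()
    {
        const int ChunkSize = 8192;   // read 8 KB per call
        byte[] buffer = new byte[ChunkSize];

        // Connection string, table, and column are hypothetical.
        using (SqlConnection conn = new SqlConnection(
            "Data Source=.;Initial Catalog=Catalog;Integrated Security=SSPI"))
        {
            SqlCommand cmd = new SqlCommand(
                "SELECT Photo FROM Products WHERE ProductId = 1", conn);
            conn.Open();

            // SequentialAccess streams the column instead of buffering the
            // whole BLOB in client memory.
            using (SqlDataReader reader =
                cmd.ExecuteReader(CommandBehavior.SequentialAccess))
            using (FileStream file = File.Create("photo.jpg"))
            {
                if (reader.Read())
                {
                    long offset = 0;
                    long read;
                    while ((read = reader.GetBytes(0, offset, buffer, 0, ChunkSize)) > 0)
                    {
                        file.Write(buffer, 0, (int)read);
                        offset += read;
                    }
                }
            }
        }
    }
}
```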
If you have many classes that perform data access, you should think about consolidating repeated functionality into helper classes. Developers with varying levels of expertise and data access knowledge may unexpectedly take inconsistent approaches to data access, and inadvertently introduce performance and scalability issues.
By consolidating critical data access code, you can focus your tuning efforts and have a single consistent approach to database connection management and data access.
More Information
For more information, see the following resources:
- For more information about best-practice data access code, see the Data Access Application Block included with Enterprise Library on MSDN.
- For more information about best practice data access, see the .NET Data Access Architecture Guide on MSDN at https://msdn.microsoft.com/en-us/library/ms978510.aspx.
The main exception handling issues that you need to consider are highlighted in Table 4.9.
Table 4.9: Exception Handling Issues
Issues | Implications |
---|---|
Poor client code validations | Unnecessary round trips to servers and expensive calls. |
Exceptions as a method of controlling regular application flow | Expensive compared to returning an enumeration or Boolean value. |
Throwing and catching too many exceptions | Increased processing overhead. |
Catching exceptions unnecessarily | Adds to performance overhead and can conceal information unnecessarily. |
To assess the efficiency of your approach to exception handling, review the following questions:
- Do you use exceptions to control application flow?
- Are exception handling boundaries well defined?
- Do you use error codes?
- Do you catch exceptions only when required?
You should not use exceptions to control the application flow because throwing exceptions is expensive. Some alternatives include the following:
Change the API so it communicates its success or failure by returning a bool value as shown in the following code.
// BAD WAY
// ... search for product
if ( !dr.Read() )  // no record found, ask to create
{
  // This is an example of throwing an unnecessary exception because
  // nothing has gone wrong and it is a perfectly acceptable situation.
  throw( new Exception("Product not found"));
}

// GOOD WAY
// ... search for product
if ( !dr.Read() )
{
  // No record found, ask to create.
  return false;
}
Refactor your code to include validation logic to avoid exceptions instead of throwing exceptions.
More Information
For more information, see the following resources:
- For more information about using exceptions, see "Design Guidelines for Exceptions" on MSDN at https://msdn.microsoft.com/en-us/library/ms229014(VS.80).aspx.
You should catch, wrap, and rethrow exceptions in predictable locations. Exception handling should be implemented using a common set of exception handling techniques per application layer. Well defined exception handling boundaries help to avoid redundancy and inconsistency in the way exceptions are caught and handled, and help maintain an appropriate level of abstraction of the error. Avoiding redundant exception handling helps application performance and can help simplify the instrumentation information an operator receives from the application.
It is common to set exception management boundaries around components that access external resources or services, and around facades that external systems or user interface logic may access.
Generally, you should avoid using method return codes to indicate error conditions. Instead, you should use structured exception handling. Using exceptions is much more expressive, results in more robust code, and is less prone to abuse than error codes as return values.
The common language runtime (CLR) internally uses exceptions even in the unmanaged portions of the engine. However, the performance overhead associated with exceptions should be factored into your decision. You can return a simple bool value to inform the caller of the result of the function call.
Catching exceptions and rethrowing them is expensive, and makes it harder to debug and identify the exact source code that was responsible for the exception. Do not catch exceptions unless you specifically want to record and log the exception details, or can retry a failed operation. If you do nothing with the exception, you will likely end up rethrowing the same exception anyway. Consider the following guidelines for catching exceptions:
You should not arbitrarily catch exceptions unless you can add some value. You should let the exception propagate up the call stack to a handler that can perform some appropriate processing.
Do not swallow any exceptions that you do not know how to handle. For example, do not swallow exceptions in your catch block as shown in the following code.
catch(Exception e)
{
  // Do nothing.
}
For more information about the questions and issues raised in this section, see "Exception Management" in Chapter 5, "Improving Managed Code Performance."
Use the following questions to help review your class design:
- Does your class own the data that it acts upon?
- Do your classes expose interfaces?
- Do your classes contain virtual methods?
- Do your classes contain methods that take variable parameters?
Review your class designs to ensure that individual classes group related data and behavior together appropriately. A class should have most of the data that it needs for processing purposes and should not be excessively reliant on other child classes. Too much reliance on other classes can quickly lead to inefficient round trips.
Generally, you should use an implicit interface-based approach in a class by wrapping functionality and exposing a single API (method) capable of performing a unit of work. This avoids the cost of unnecessary virtual table hops.
Use explicit interfaces only when you need to support multiple versions or when you need to define common functionality applicable to multiple class implementations (that is, for polymorphism).
Review the way you use virtual members in your classes. If you do not need to extend your class, avoid using them because, for .NET Framework 1.1, calling a virtual method involves a virtual table lookup. As a result, virtual methods are not inlined by the compiler because the final destination cannot be known at design time.
Use virtual members only to provide extensibility to your class. If you derive from a class that has virtual members, you can mark the derived class methods with the sealed keyword, which results in the method being invoked as a nonvirtual method. This stops the chain of virtual overrides.
Consider the following example.
public class MyClass{
protected virtual void SomeMethod() { ... }
}
You can override and seal the method in a derived class as follows.
public class DerivedClass : MyClass {
protected override sealed void SomeMethod() { ... }
}
This code ends the chain of virtual overrides and makes DerivedClass.SomeMethod a candidate for inlining.
Methods that take a variable number of parameters result in special code paths for each possible combination of parameters. For performance-sensitive code, prefer a set of overloaded methods with fixed numbers of parameters over a single method that takes a variable number of parameters.
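A minimal sketch of that trade-off, using a hypothetical Join helper: a params method allocates an object array for every call, so providing overloads for the common arities avoids the allocation while keeping a params overload as the fallback.

```csharp
using System;

class Formatter
{
    // Hypothetical overloads for the common cases -- no array allocation.
    public static string Join(string a, string b)
    {
        return a + "-" + b;
    }

    public static string Join(string a, string b, string c)
    {
        return a + "-" + b + "-" + c;
    }

    // Fallback: the compiler allocates a string[] for every call that
    // does not match one of the fixed-arity overloads.
    public static string Join(params string[] parts)
    {
        return String.Join("-", parts);
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(Formatter.Join("a", "b"));           // binds to (string, string)
        Console.WriteLine(Formatter.Join("a", "b", "c", "d")); // falls back to params
    }
}
```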
More Information
For more information about methods with variable numbers, see the "Methods With Variable Numbers of Arguments" section of "Method Usage Guidelines" in the .NET Framework General Reference on MSDN at https://msdn.microsoft.com/en-us/library/xxfyae0c(VS.71).aspx.
Architecture and design reviews for performance should be a regular part of your application development life cycle. The performance characteristics of your application are determined by its architecture and design. No amount of fine-tuning and optimization can make up for poor design decisions that fundamentally prevent your application from achieving its performance and scalability objectives.
This chapter has presented a process and a set of questions that you should use to help you perform reviews. Apply this review guidance to new and existing designs.
For more information, see the following resources:
- For a printable checklist, see "Checklist: Architecture and Design Review for Performance and Scalability" in the "Checklists" section of this guide.
- For a question-driven approach to reviewing code and implementation from a performance perspective, see Chapter 13, "Code Review: .NET Application Performance."
- For information about how to assess whether your software architecture will meet its performance objectives, see PASA: An Architectural Approach to Fixing Software Performance Problems, by Lloyd G. Williams and Connie U. Smith, at http://www.perfeng.com/papers/pasafix.pdf.
- For information about patterns, see Enterprise Solution Patterns Using Microsoft .NET on MSDN at https://msdn.microsoft.com/en-us/library/ms998469.aspx.
- For more information about application architecture, see Application Architecture for .NET: Designing Applications and Services on MSDN at https://msdn.microsoft.com/en-us/library/ms954595.aspx.