Analyzing Performance


Visual Studio Analyzer makes it easier to analyze the performance of your application by gathering data from all the components and systems automatically and presenting it to you. You can:

  • Choose how you want to view the data (see Understanding Visual Studio Analyzer Views for more information).

  • Focus on what happened and when.

  • Optionally collect and record Windows NT Performance Monitor data. If you choose to save an event log containing Performance Monitor data, you can have a snapshot of Performance Monitor data at a particular point in time, correlated with events in your application.

Comparing Visual Studio Analyzer to Code Profiling Tools

Visual Studio Analyzer provides information similar to that of profiling tools, but at a very different level of granularity. Code profiling tools, such as the Microsoft Source Profiler, are normally used to understand exactly where a component or single-process application is spending its execution time. These tools usually produce reports identifying the percentage of time spent in specific functions or at specific lines in the source code. Visual Studio Analyzer performance analysis is designed to give you a coarse-grained report of where time was spent, based on the information flow of the application. You see performance information at the Visual Studio Analyzer event level rather than at the function or source-code line level; time is reported for Visual Studio Analyzer event pairings, such as a function call and its matching return, rather than for individual functions or lines.
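To make the contrast concrete, here is a minimal sketch of how durations can be derived from paired call and return events. The event format, field names, and values below are hypothetical stand-ins; Visual Studio Analyzer's actual event log format differs, and the sketch only illustrates the general technique of timing event pairs rather than individual functions.

```python
# Hypothetical, simplified event records. Visual Studio Analyzer's real
# log format differs; this only illustrates pairing call/return events.
from dataclasses import dataclass

@dataclass
class Event:
    timestamp_ms: float   # when the event was generated
    kind: str             # "call" or "return"
    call_id: int          # correlates a call with its matching return
    description: str      # e.g. "COM: CreateObject" or "ADO: query"

def pair_durations(events):
    """Match each call event with its return and report elapsed time."""
    pending = {}
    durations = []
    for ev in sorted(events, key=lambda e: e.timestamp_ms):
        if ev.kind == "call":
            pending[ev.call_id] = ev
        elif ev.kind == "return" and ev.call_id in pending:
            start = pending.pop(ev.call_id)
            durations.append((start.description,
                              ev.timestamp_ms - start.timestamp_ms))
    return durations

events = [
    Event(0.0, "call", 1, "COM: CreateObject"),
    Event(12.5, "return", 1, "COM: CreateObject"),
    Event(20.0, "call", 2, "ADO: query"),
    Event(180.0, "return", 2, "ADO: query"),
]
for desc, ms in pair_durations(events):
    print(f"{desc}: {ms:.1f} ms")
```

Because each duration spans a whole interaction (a call and its matching return), the result is coarse-grained: it tells you which cross-component calls are slow, not which source lines inside a component are slow.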

Using Visual Studio Analyzer for the Right Purpose

Visual Studio Analyzer's strength is in providing a high-level look at performance analysis, across all tiers and systems, by focusing on the interactions between the components of an application and allowing traditional profiling tools to provide performance analysis within those components.

Although you can use Visual Studio Analyzer to manually instrument your code and analyze the results, much as you would with a traditional profiling tool, Visual Studio Analyzer is not intended for this use. Using it this way has two important drawbacks:

  • You must instrument your code manually. Profiling tools instrument your code automatically.

  • Visual Studio Analyzer's network and disk usage are likely to overwhelm those of the application or system being analyzed.

Using Visual Studio Analyzer to Isolate a Performance Bottleneck

You can use Visual Studio Analyzer to identify and analyze slow parts of your application. For example, suppose your COM-based, data-driven application seems to be running slower than you expect. Here's how you could use Visual Studio Analyzer to help you locate the problem:

  1. Begin by creating a new Visual Studio Analyzer project and connecting to the machines running the application. Create a new filter and open the filter in the Filter Editor.

    For more information about projects and filters, see Understanding Visual Studio Analyzer Projects and Understanding Visual Studio Analyzer Filters.

  2. For the filter, select the machines where the application is running; COM and ADO (or whatever data access technology your application uses) as the components; and basic COM and database-related events, such as object creation, function call, and query. The exact choices depend, of course, on your application's architecture and on the nature of the performance problem. You might also use a predefined filter; the idea is simply to collect some useful data that might shed light on the problem.

  3. After defining your filter, set it as the recording filter, create a new event log, and then start recording events into the log. You might open the Event List view, which provides the most comprehensive information, and watch as the collected events are displayed.

    For more information about event logs and views, see Understanding Visual Studio Analyzer Event Logs and Understanding Visual Studio Analyzer Views.

  4. When your application is finished, stop recording events and open the Chart view to see timing information for the collected events. By selecting different nodes in the event tree and watching the Gantt chart, you can see which events are taking the most time. You might notice that COM events are taking a long time relative to other events and determine that they are slow because the underlying database queries are slow. The Gantt chart shows the amount of time spent in each database query; if that time accounts for a large share of the total, it indicates a potential problem.
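The reasoning in step 4, summing the time attributed to each kind of event and asking whether one category dominates, can be sketched as follows. The input data and category names are hypothetical stand-ins for values you would read off the Gantt chart:

```python
from collections import defaultdict

def time_share_by_category(durations):
    """Sum elapsed time per event category and compute each category's
    share of the total measured time."""
    totals = defaultdict(float)
    for category, ms in durations:
        totals[category] += ms
    grand_total = sum(totals.values())
    return {cat: ms / grand_total for cat, ms in totals.items()}

# Hypothetical (category, elapsed ms) pairs from the recorded event log
durations = [
    ("COM call", 15.0),
    ("COM call", 10.0),
    ("database query", 160.0),
    ("database query", 140.0),
]
shares = time_share_by_category(durations)
# Here the database queries dominate: roughly 92% of the measured time,
# which is the kind of signal that points you at the query layer.
```

A category that accounts for most of the measured time is your candidate bottleneck; a traditional profiler can then take over inside that component.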

Advanced Performance Analysis Using Data from Performance Monitor

In step 4 of the previous example, you might have determined that database queries were running slowly. Here's how you could use Visual Studio Analyzer and Performance Monitor data together to find out why:

  1. Begin by creating a new event log and opening the filter you created previously in the Filter Editor. Select the dynamic events category to add performance counters from the machines running the COM and database servers. These counters would include CPU utilization, paging activity, and SQL statistics.

  2. Set the revised filter as the recording filter and start recording events into the new event log. Run your application as before and stop recording events when the application finishes.

  3. Open the Chart view again to see the performance problems, but this time overlay a line graph on the view and add the CPU events to it. The line graph then shows CPU utilization over time.

  4. You might select one of the events that shows poor database performance and notice how the line graph highlights the time range associated with that event. You see that the CPU load at that point in time is quite high compared with the CPU load at other times.

  5. Next you might add a line graph showing paging activity, which might reveal heavy paging. Then add a line graph showing the SQL statistics, which could reveal that the complexity of the query is causing significant database activity. Knowing this, you could return to the source code and look for ways to optimize the query.
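The correlation performed in steps 4 and 5, inspecting counter samples only within a slow event's time window, can be sketched as follows. The counter samples, timestamps, and values are hypothetical; in Visual Studio Analyzer this correlation is done visually in the Chart view rather than in code:

```python
def samples_in_window(samples, start_ms, end_ms):
    """Return the (timestamp, value) counter samples that fall inside
    a given event's time window."""
    return [(t, v) for t, v in samples if start_ms <= t <= end_ms]

# Hypothetical CPU-utilization samples: (timestamp in ms, percent busy)
cpu_samples = [(0, 20), (100, 25), (200, 95), (300, 90), (400, 30)]

# Suppose the slow database query ran from 150 ms to 350 ms
window = samples_in_window(cpu_samples, 150, 350)
avg_cpu = sum(v for _, v in window) / len(window)
# A high average CPU inside the window, relative to the rest of the run,
# suggests the query itself is driving the load.
```

The same windowing idea applies to paging and SQL counters: each additional counter either confirms or rules out one explanation for the slow time window.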