Reporting Services Performance and Optimization

As of September 1, 2013, we decided to retire the SQLCAT.COM site and use MSDN as the primary vehicle for posting new SQL Server content. This was done to minimize reader confusion and to streamline content publication. The MSDN SQLCAT blogs already include most SQLCAT.COM content and will continue to be updated with more SQLCAT learnings. You can also find a collection of our work in the SQLCAT Guidance eBooks.

To make the transition to MSDN smoother, we are in the process of reposting a few SQLCAT.COM blogs that are still very actively viewed. The following is a repost of one of the SQLCAT.COM blogs that still draws strong interest. You can find this, and other guidance, in the SQLCAT's Guide to BI and Analytics eBook.

Also, follow us on Twitter; we use our Twitter handles @SQLCAT and @WinAzureCAT to announce news and new content.

 

Reporting Services Performance Optimization

Authors: Denny Lee, Lukasz Pawlowski
Contributors: Robert Bruckner, James Wu, John Gallardo, Dean Kalanquin

Introduction

This technical note provides guidance for Reporting Services in both Microsoft® SQL Server® 2005 and SQL Server 2008. The focus of this technical note is to optimize your Reporting Services architecture for better performance and higher report execution throughput and user loads.

 

Architecture

Figure 1 represents a typical scale-out Reporting Services environment; as noted by the red box, the focus of this technical note is performance optimization of your scale-out deployment.


Figure 1: Reporting Services Scale-Out Architecture

 

Should You Use 64-Bit?

Yes.

Why, you ask? The answer can be divided into two parts: how 64-bit helps your report server catalog performance, and how it helps your report server service performance.

How 64-bit Helps Your Report Server Catalog Performance

Remember that your report server catalogs are SQL Server databases, so your standard database techniques for optimizing SQL Server databases come into play here. Since SQL Server 2005, the database has been written natively for 64-bit and it is able to make use of the additional addressable memory.

How 64-bit Helps Your Report Server Service

For your report servers, the answer is slightly more complicated. As a general statement, most of your Reporting Services reports are memory-intensive, and the additional addressable memory made available by 64-bit will provide you with more scale. Note that for some workloads, 32-bit can execute faster, especially in scenarios where you have many small reports. But if you have more memory made available by 64-bit, you can handle more concurrent report users. Because there is less memory contention, report throughput will be higher; that is, you can allow more users to view and export larger reports. In SQL Server 2005 Reporting Services, each report’s data set is placed into memory; the more concurrent reports, the more memory used. If you use 32-bit, you can easily hit the 3 GB ceiling, which can also result in Internet Information Services (IIS) process recycles, leading to report failure.

SQL Server 2008 Reporting Services, however, is not memory-bound. If the report server experiences memory pressure, it can move data structures out of memory and into the file system, and bring them back as needed. These memory limits are explicitly configurable in SQL Server 2008 Reporting Services via RSReportServer.config, as noted in the Memory Configurations for SQL Server 2008 Reporting Services section below. When Reporting Services uses the file system, reports run more slowly, because it is much more efficient to request data from memory than from disk. The file system is used only if Reporting Services memory usage gets close to the configured memory limits. If you overload a SQL Server 2008 Reporting Services report server with a large number of concurrent users and/or very large reports, all of your reports can still complete processing, albeit more slowly, because Reporting Services can hold all of this data without running out of memory space. In enterprise environments, you will eventually run into situations where your servers need to handle many concurrent users and a large load; the optimization in SQL Server 2008 Reporting Services (compared to SQL Server 2005 Reporting Services) is that while the reports may run more slowly at times, they will complete.

Exceptions

Keep in mind that certain data providers are not available for 64-bit (for example, the Microsoft JET provider or certain third-party providers). In these cases, customers will need to continue using 32-bit for their Reporting Services environment.

 

Handling a Large Workload

As noted in the previous section, the two main issues concerning enterprise reporting environments are the ability to handle concurrent user load and the ability to handle a large workload (that is, large reports). To help mitigate the concurrency issue, your solution is to scale out to multiple report servers to handle the user query load.

To get the highest performance when handling large workloads that include user requests for large reports, implement the following recommendations.

Control the Size of Your Reports

You will first want to determine the purpose of these reports and whether a large multipage report is even necessary. If a large report is necessary, how frequently will it be used? If you provide users with smaller summary reports, can you reduce the frequency with which users attempt to access this large multipage report? Large reports place a significant processing load on the report server, the report server catalog, and the report data source, so it is necessary to evaluate each report on a case-by-case basis.

Some common problems with these large reports are that they contain data fields that are not used in the report or they contain duplicate datasets. Often users retrieve more data than they really need. To significantly reduce the load placed on your Reporting Services environment, create summary reports that use aggregates created at the data source, and include only the necessary columns. If you want to provide data feeds, you can do this asynchronously using more appropriate tools, such as SQL Server Integration Services, to provide the file data feed.
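To illustrate the idea, here is a minimal T-SQL sketch of pushing the aggregation into the data source; the table and column names (dbo.Sales, OrderDate, Region, SalesAmount) are hypothetical stand-ins for whatever detail table your report currently pulls.

    -- Hypothetical example: rather than returning every detail row and letting
    -- Reporting Services group and sum it, have the data source return a
    -- pre-aggregated summary with only the columns the report actually uses.
    SELECT
        Region,
        DATEPART(YEAR, OrderDate)  AS OrderYear,
        DATEPART(MONTH, OrderDate) AS OrderMonth,
        SUM(SalesAmount)           AS TotalSales,
        COUNT(*)                   AS OrderCount
    FROM dbo.Sales
    WHERE OrderDate >= DATEADD(MONTH, -3, GETDATE())   -- limit to the range the report shows
    GROUP BY Region, DATEPART(YEAR, OrderDate), DATEPART(MONTH, OrderDate)
    ORDER BY Region, OrderYear, OrderMonth;

The summary report then renders a few hundred rows instead of streaming every detail row through the report server.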

Use Cache Execution

If you have reports that do not need live execution, enable the cache execution setting for each appropriate report. This setting causes the report server to cache a temporary copy of the report the first time it is executed (the cached copy is held in the report server temporary database), so subsequent requests can reuse it instead of re-running the query and report processing.

Configure and Schedule Your Reports

For your large reports, use the Report Execution Timeouts setting to control how long a report can execute before it times out. Some reports simply need a long time to run, so timeouts will not help you there, but if reports are based on bad or runaway queries, execution timeouts ensure that resources are not being inappropriately utilized.

If you have large reports that create data processing bottlenecks, you can mitigate resource contention issues by using Scheduled Snapshots. Instead of running the query and processing the report data on demand, the report is rendered from a regularly scheduled report execution snapshot. The scheduled snapshot can be executed during off-peak hours, leaving more resources available for live report users during peak hours.

Deliver Rendered Reports for Nonbrowser Formats

The rendering performance of nonbrowser formats such as PDF and XLS has improved in SQL Server 2008 Reporting Services. Nevertheless, to reduce the load on your SQL Server Reporting Services environment, you can place nonbrowser format reports onto a file share and/or SharePoint® team services, so users can access the file directly instead of continually regenerating the report.

Prepopulate the Report Cache by Using Data-Driven Subscriptions for Parameterized Reports

For your large parameterized reports, you can improve performance by prepopulating the report cache using data-driven subscriptions. Data-driven subscriptions make it easier to populate the cache for the combinations of parameter values that are most frequently used when the parameterized report is executed. Note that if you choose a set of parameter values that is rarely used, you take on the cost of populating the cache with little value in return. Therefore, to identify the more frequent parameter value combinations, analyze the ExecutionLog2 view as explained below. Ultimately, when a user opens the report, the report server can use a cached copy instead of creating the report on demand. You can schedule and populate the report cache by using data-driven subscriptions.
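To choose which parameter combinations to cache, a query along these lines can help; this is a sketch that assumes the default report server catalog name (ReportServer), that you have permission to query it, and a hypothetical report path.

    -- Sketch: the most frequently used parameter combinations for one report
    -- over the last 30 days. Adjust the catalog name, report path, and window.
    SELECT TOP 20
        Parameters,
        COUNT(*) AS Executions
    FROM ReportServer.dbo.ExecutionLog2
    WHERE ReportPath = '/Sales/Regional Sales Detail'   -- hypothetical report path
      AND TimeStart >= DATEADD(DAY, -30, GETDATE())
    GROUP BY Parameters
    ORDER BY Executions DESC;

The top combinations are good candidates for the parameter values your cache-warming data-driven subscription supplies.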

Back to the Report Catalogs

You can also increase the sizes of your report server catalogs, which allows the databases to store more of the snapshot data.

Tuning the Web Service

IIS and Http.sys tuning helps get the last incremental performance out of the report server computer. The low-level options allow you to change the length of the HTTP request queue, the duration that connections are kept alive, and so on. For large concurrent reporting loads, it may be necessary to change these settings to allow your server computer to accept enough requests to fully utilize the server resources.

You should consider this only if your servers are at maximum load and you do not see full resource utilization, or if you experience connection failures to the Reporting Services process.

 

Monitoring by Using ExecutionLog2

The Reporting Services ExecutionLog2 view is a good starting point from which to analyze your current workloads and understand their dataset size, performance, and complexity characteristics. For more information, see Robert Bruckner’s blog, which provides extensive details on the ExecutionLog2 view (https://blogs.msdn.com/robertbruckner/archive/2009/01/05/executionlog2-view.aspx). For more information about querying and reporting on report execution log data, see SQL Server Books Online (https://msdn.microsoft.com/en-us/library/ms155836.aspx).

In particular, this view contains a new AdditionalInfo column. ExecutionLog2.AdditionalInfo contains information related to the size of the data structures involved when Reporting Services responds to memory pressure. One way this information can be useful is to check whether you have reports with high values (tens or hundreds of MB); these reports might be candidates for further review, focusing on report design and dataset query sizes.

Below are some tips on how to use the ExecutionLog2 view to quickly identify potential performance bottlenecks. The summary and detail Reporting Services reports shown in Figures 2 and 3 are built on the last 1,000 entries in the ExecutionLog2 view, with the sorting options noted below.


Figure 2: Review Execution Logs (ExecutionLog2) Summary Report


Figure 3: Review Execution Logs (ExecutionLog2) Details Report
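If you prefer to query the view directly rather than use the sample reports, the following is a minimal sketch; it assumes the default catalog name (ReportServer), and it computes ElapsedSec from TimeStart and TimeEnd because elapsed time is not stored as a separate column (the sample reports expose a similar calculated column).

    -- Sketch: the most recent 1,000 executions with the columns the summary
    -- and detail reports sort on. Assumes the default catalog name, ReportServer.
    SELECT TOP 1000
        InstanceName,
        ReportPath,
        RequestType,                                    -- Interactive or Subscription
        Source,                                         -- Live, Cache, Snapshot, ...
        Status,
        TimeStart,
        DATEDIFF(SECOND, TimeStart, TimeEnd) AS ElapsedSec,
        TimeDataRetrieval,                              -- milliseconds
        TimeProcessing,                                 -- milliseconds
        TimeRendering,                                  -- milliseconds
        [RowCount],
        ByteCount,
        AdditionalInfo
    FROM ReportServer.dbo.ExecutionLog2
    ORDER BY TimeStart DESC;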

Long-Running?

Sorting by ElapsedSec or RowCount helps you identify long-running reports. If the value for TimeDataRetrieval is high, the data source is your bottleneck, and you may want to optimize it. If there is a high value for RowCount, a lot of data is being retrieved and aggregated by Reporting Services; consider performing that aggregation at the data source to reduce the load on your report server.

Subscriptions or Interactive?

Sorting by the RequestType field allows you to determine whether you have a lot of subscriptions; you can then determine the bottlenecks and stagger-schedule the reports (that is, schedule the subscription execution times of the reports at different times).
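To see where subscription executions bunch up, a sketch like the following (again assuming the ReportServer catalog) buckets subscription requests by hour; hours with large spikes are candidates for stagger-scheduling.

    -- Sketch: subscription executions per hour over the last 7 days,
    -- to spot scheduling hot spots.
    SELECT
        DATEPART(HOUR, TimeStart) AS HourOfDay,
        COUNT(*)                  AS SubscriptionExecutions
    FROM ReportServer.dbo.ExecutionLog2
    WHERE RequestType = 'Subscription'
      AND TimeStart >= DATEADD(DAY, -7, GETDATE())
    GROUP BY DATEPART(HOUR, TimeStart)
    ORDER BY SubscriptionExecutions DESC;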

Live Data or Snapshots?

Sorting by the Source field allows you to determine whether your reports typically run against live data or snapshots. If the reports can be snapshots (for example, yesterday’s report), create snapshots so you can avoid query execution, report processing, and report rendering.

Load Balanced?

Sorting by the Instance field can help you see whether your network load balancer is handling report requests in a balanced fashion. You can also see if some nodes are down or not processing requests.
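A quick way to check the distribution is to count executions per instance over a recent window, as in this sketch (assuming the ReportServer catalog):

    -- Sketch: executions per report server instance over the last 24 hours.
    -- An instance with far fewer rows than its peers may be down or excluded
    -- from the load balancer.
    SELECT
        InstanceName,
        COUNT(*) AS Executions
    FROM ReportServer.dbo.ExecutionLog2
    WHERE TimeStart >= DATEADD(DAY, -1, GETDATE())
    GROUP BY InstanceName
    ORDER BY Executions DESC;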

Discover Report Patterns

Sorting by ReportPath and TimeStart can help you to find interesting report patterns – for example, an expensive report that takes 5 minutes to run is executed every 10 minutes.

Report Health

You can sort by the Status field to determine whether you have a high number of failures that occur before (for example, incorrect RDL) or after (for example, subscription delivery errors) the report is processed. This can also help you identify reports with outdated information or settings (for example, expired data source passwords or missing subreports).

In addition, if ScalabilityTime > 0, Reporting Services is in scale mode, which means that it is under heavy memory pressure and will start pushing long-running reports to the file system to ensure there is enough memory to complete smaller query requests. If this happens frequently, consider trying one of the following (a query sketch for finding the affected executions appears after this list):

  • Reduce the size of the dataset.
  • Simplify the grouping, filtering, or sorting logic in the report to use less memory.
  • Add more memory.
  • Add more servers to better handle the load.
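The ScalabilityTime values are reported inside the AdditionalInfo XML. The following sketch extracts them; the element names (ScalabilityTime/Processing and ScalabilityTime/Pagination) are assumptions based on sample log output, so verify them against your own AdditionalInfo values before relying on the query.

    -- Sketch: executions that spilled work to the file system under memory pressure.
    -- Assumes the ReportServer catalog; the XML element names are assumptions to
    -- verify against your own AdditionalInfo content.
    SELECT
        ReportPath,
        TimeStart,
        CAST(AdditionalInfo AS xml).value('(/AdditionalInfo/ScalabilityTime/Processing)[1]', 'int') AS ScalabilityProcessing,
        CAST(AdditionalInfo AS xml).value('(/AdditionalInfo/ScalabilityTime/Pagination)[1]', 'int') AS ScalabilityPagination
    FROM ReportServer.dbo.ExecutionLog2
    WHERE TimeStart >= DATEADD(DAY, -7, GETDATE())
      AND CAST(AdditionalInfo AS nvarchar(max)) LIKE '%<ScalabilityTime>%'
    ORDER BY ScalabilityProcessing DESC;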

 

Data-Driven Subscriptions

Based on all of this information, you can then create your own data-driven subscriptions that can sort, filter, and track your issues. For example, you can create a subscription that alerts you if Errors > 5%.
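A sketch of the query behind such a subscription (assuming the ReportServer catalog, and that 'rsSuccess' is the success status recorded in your log) computes the failure rate over the last day and returns a row only when it crosses the threshold, so the subscription fires only when there is something to report.

    -- Sketch: return a row (which triggers the data-driven subscription) only when
    -- more than 5% of executions in the last 24 hours did not end in rsSuccess.
    SELECT
        COUNT(*) AS TotalExecutions,
        SUM(CASE WHEN Status <> 'rsSuccess' THEN 1 ELSE 0 END) AS FailedExecutions,
        100.0 * SUM(CASE WHEN Status <> 'rsSuccess' THEN 1 ELSE 0 END)
              / NULLIF(COUNT(*), 0) AS FailurePercent
    FROM ReportServer.dbo.ExecutionLog2
    WHERE TimeStart >= DATEADD(DAY, -1, GETDATE())
    HAVING 100.0 * SUM(CASE WHEN Status <> 'rsSuccess' THEN 1 ELSE 0 END)
                 / NULLIF(COUNT(*), 0) > 5.0;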

 

Memory Configurations for SQL Server 2008 Reporting Services

As alluded to in the 64-bit section above, memory in SQL Server 2008 Reporting Services is used more efficiently at the data structure level. Under intensive workload pressure, it uses a file system cache in order to reduce memory utilization. By being more efficient and writing to disk, it can process more report executions (even large reports) successfully. The administrator uses settings to determine when SQL Server 2008 Reporting Services starts to use the file system cache and how aggressively the cache is used. The administrator should consider configuring memory settings for SQL Server 2008 Reporting Services to make optimal use of the computer's resources. Fine-tuning these settings can provide an increase in performance under intensive workloads over the default configuration.

For more information, see SQL Server 2008 Books Online: Configuring Available Memory for Report Server Applications. The key memory configurations are:

  • WorkingSetMinimum: This is the minimum amount of memory that Reporting Services makes available in order for it to perform its task; that is, it does not use the file system cache if SQL Server Reporting Services process memory utilization is below this limit. It is configured as a KB value within the RSReportServer.config file. After this threshold is hit, Reporting Services responds to memory pressure by having long-running requests use the file system cache and smaller queries use memory.

For many concurrent users, you can potentially increase this configuration value, enabling more requests to be completed in memory instead of through the file system cache.

 

 

  • WorkingSetMaximum: This is the maximum amount of physical memory Reporting Services will use; it is also configured as a KB value within the RSReportServer.config file. After this threshold is hit and exceeded for a period of time, Reporting Services recycles the app domains to reduce memory utilization. This will ensure that there is enough memory left for the operating system to complete its task. You can increase this value if you want to process more reports concurrently.

 

  • MemorySafetyMargin and MemoryThreshold: These values define how aggressively the file system cache is used. MemorySafetyMargin defines the boundary between low-pressure and medium-pressure scenarios, with a default value of 80%. MemoryThreshold defines the boundary between medium-pressure and high-pressure scenarios, with a default value of 90%. Both are configured within the RSReportServer.config file.
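For reference, the following is an illustrative RSReportServer.config fragment showing these elements together. The element names are the documented settings discussed above; the numeric values are placeholders for a hypothetical server with 16 GB of RAM, not recommendations, and WorkingSetMinimum and WorkingSetMaximum must be added to the file if they are not already present.

    <!-- Illustrative fragment only; values are placeholders for a 16 GB server. -->
    <!-- WorkingSetMinimum and WorkingSetMaximum are specified in kilobytes;     -->
    <!-- MemorySafetyMargin and MemoryThreshold are percentages.                 -->
    <MemorySafetyMargin>80</MemorySafetyMargin>
    <MemoryThreshold>90</MemoryThreshold>
    <WorkingSetMaximum>12000000</WorkingSetMaximum>
    <WorkingSetMinimum>4000000</WorkingSetMinimum>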

In general, if you are constantly hitting memory thresholds, it is recommended that you consider scaling up (adding more memory, altering these configuration values) and then scaling out. You should scale up first because resources are better managed in SQL Server 2008 Reporting Services.

 

Memory Configurations for SQL Server 2005 Reporting Services

You can scale up by adding more memory as well, but the impact may not be as dramatic; in one scenario, doubling memory and CPU (to 16 GB of RAM and 8 cores) yielded only about a one-third increase in capacity. Nevertheless, there are still things you can do to scale up SQL Server 2005 Reporting Services before scaling out.

As with SQL Server 2008 Reporting Services, you can use memory configurations to address memory threshold issues. There are two primary memory configurations in SQL Server 2005 Reporting Services that we recommend you change if you’re constantly hitting memory thresholds:

  • MemoryLimit: This configuration is similar to WorkingSetMinimum in SQL Server 2008. Its default is 60% of physical memory. Increasing the value helps Reporting Services handle more requests. After this threshold is hit, no new requests are accepted.
  • MaximumMemoryLimit: This configuration is similar to WorkingSetMaximum in SQL Server 2008. Its default is 80% of physical memory. But unlike the SQL Server 2008 version, when its threshold is met, it starts aborting processes instead of rejecting new requests.

While we do recommend memory configuration changes if you’re constantly hitting memory thresholds, note that changing these values might create or exacerbate other resource contentions.

 

Conclusion

This concludes our four-part “Building and Deploying Large Scale SQL Server Reporting Services Environments” technical note series. We hope that this set of technical notes will help you design, manage, and maintain your enterprise SQL Server Reporting Services environments.
