SQL Server 2014 now released by SAP for SAP NetWeaver 7.xx

Last year we released a series of articles on this blog where we introduced new features and showed the advantages of SQL Server 2014 in conjunction with SAP NetWeaver applications. We can now state that as of 03/11/2015, SAP has released support for SQL Server 2014 for NetWeaver based applications. This release marks a milestone, since from now on customers can upgrade existing SAP NetWeaver based installations from SQL Server 2008, 2008 R2 and SQL Server 2012 to SQL Server 2014. New installations of SAP NetWeaver based applications directly against SQL Server 2014 are also supported by SAP.

Predecessor releases of SQL Server 2014 already proved that the most challenging SAP workloads and business processes can be run on SQL Server in conjunction with inexpensive industry standard hardware. Functionality like SQL Server AlwaysOn became standard functionality for providing business continuity for the most demanding availability requirements of SAP systems. SQL Server 2014 continues to deliver features and functionality for SAP workloads that provide improved scale, throughput, performance and reliability. A new area where SQL Server 2014 provides leading functionality is the integration with Microsoft Azure Virtual Machine services. These features enable SAP customers to reduce TCO by leveraging Azure Virtual Machine services as infrastructure instead of investing in storage and server infrastructure themselves.

Let’s summarize again what the advantages of SQL Server 2014 are for SAP applications.

Features SAP customers will benefit from

SQL Server in-memory optimized columnstore

One of the most significant features SQL Server 2014 provides to SAP NetWeaver applications is the introduction of a modifiable version of SQL Server's in-memory optimized columnstore. The integration of this new functionality into BW vastly extends the BW workload scenarios which benefit from SQL Server's in-memory optimized columnstore capabilities. Storing data many times more efficiently and scanning massive amounts of data with many times higher throughput are the most impressive results provided by the SQL Server in-memory optimized columnstore. These advantages materialize in scenarios where customers speed up existing SAP BW scenarios dramatically without the need for additional infrastructure investments, or replace older acceleration technology with a simpler and faster configuration of SAP BW. The new modifiable in-memory optimized SQL Server columnstore technology also enabled us to adopt a new SAP BW cube format. It is actually the cube format SAP pioneered with SAP BW running on HANA. Experience with this new cube format showed an additional significant reduction of query response times compared to the traditional SAP BW cube format. The new SAP BW cube format is available with SAP BW 7.40 SP8 in conjunction with SQL Server 2014. More documentation can be found here: https://blogs.msdn.com/b/saponsqlserver/archive/2014/01/16/new-functionality-in-sql-server-2014-part-1-modifiable-column-store-index.aspx

The release status of the notes will be updated in this blog: https://blogs.msdn.com/b/saponsqlserver/archive/2015/01/08/sql-server-column-store-for-sap-bw-status-and-release-planning.aspx 

and

SAP Note 2114876 - Release Planning SAP BW for SQL Server Column-Store.
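
In SAP BW these columnstore indexes are created and maintained by the BW administration tools described in the notes above, so no manual DDL is required. Purely as an illustration of the underlying SQL Server 2014 statement, and using a hypothetical BW fact table name, the DDL boils down to a single command:

    -- Convert an (illustrative) BW fact table to the updateable clustered columnstore format.
    -- In a real BW system this is done by the BW housekeeping tools, not manually.
    CREATE CLUSTERED COLUMNSTORE INDEX [/BIC/FZSALES~CS]
        ON [dbo].[/BIC/FZSALES];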

Backup Encryption

Especially when leveraging public cloud scenarios, customers expressed the desire for backup encryption as a native SQL Server capability. SQL Server 2014 introduces this functionality. It can be combined with SQL Server's backup compression. Backup compression will still provide the full compression ratio while backup encryption encrypts the compressed backup data. More details can be found here: https://blogs.msdn.com/b/saponsqlserver/archive/2014/02/15/new-functionality-in-sql-server-2014-part-5-backup-restore-enhancements.aspx

and

https://msdn.microsoft.com/en-us/library/dn449489.aspx
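
Purely as a sketch, a compressed and encrypted backup in SQL Server 2014 requires a certificate (or asymmetric key) in the master database plus one additional option on the BACKUP statement; the database, path and certificate names below are placeholders:

    -- One-time setup: master key and certificate used to protect the backups
    USE master;
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
    CREATE CERTIFICATE SAPBackupCert WITH SUBJECT = 'SAP backup encryption certificate';

    -- Compressed and encrypted database backup
    BACKUP DATABASE [PRD]
    TO DISK = 'R:\backup\PRD_full.bak'
    WITH COMPRESSION,
         ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = SAPBackupCert);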

Backup against URL

SQL Server 2014 allows you to write full, differential and transaction log backups directly to Azure storage. This is a feature we mostly see in demand with customers who are running SQL Server in Azure Infrastructure as a Service scenarios. Instead of addressing drives mounted to the server or VM as the backup destination, you can define the destination as an Azure Storage Account URL. You can read more details here:

https://blogs.msdn.com/b/saponsqlserver/archive/2014/02/15/new-functionality-in-sql-server-2014-part-5-backup-restore-enhancements.aspx

and

https://msdn.microsoft.com/en-us/library/dn435916.aspx
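
As a minimal sketch, and assuming an existing storage account and container, the backup destination is expressed as a URL and authenticated through a SQL Server credential; all names and the key below are placeholders:

    -- Credential holding the storage account name and access key
    CREATE CREDENTIAL AzureBackupCredential
    WITH IDENTITY = '<storage account name>',
         SECRET = '<storage account access key>';

    -- Full database backup written directly to an Azure Storage Account URL
    BACKUP DATABASE [PRD]
    TO URL = 'https://<storage account name>.blob.core.windows.net/backups/PRD_full.bak'
    WITH CREDENTIAL = 'AzureBackupCredential',
         COMPRESSION;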

New Cardinality Estimation

SQL Server was designed in a very modular way nearly two decades ago. Taking advantage of this modular design, SQL Server 2014 introduces a new cardinality estimation module. Cardinality estimation plays an important role in creating a plan to execute a query. One of the first, very important steps in creating the plan to execute a query is to estimate the number of rows that the query and the different branches within the query will return. This estimation is based on the index and column statistics. The estimation is important because it can determine the join order of a query, the join type chosen, and so on. The functionality which performs this estimation is called 'cardinality estimation'. The new cardinality estimation algorithms were introduced in order to close a few gaps the old algorithms had. More details on the differences and other new functionality of the new cardinality estimation can be found here:

https://msdn.microsoft.com/en-us/library/dn673537.aspx

and https://blogs.msdn.com/b/saponsqlserver/archive/2014/01/16/new-functionality-in-sql-server-2014-part-2-new-cardinality-estimation.aspx

and

SAP Note 1961467 - SQL Server 2014 and later - New Cardinality Estimation in Query Optimization

Deploying SAP software against SQL Server 2014, or performing the SAP post-upgrade step after upgrading to SQL Server 2014, will set a trace flag which disables the new cardinality estimation logic and enables the old logic again. The reason we decided to go that way is rooted in the fact that the new cardinality estimation results in different plans than the old logic did. This introduces the chance that queries will not always end up with a more efficient plan using the new logic, compared to the plan generated with the old logic. Therefore we intended to separate the step and testing of moving to SQL Server 2014 from testing the new cardinality estimation. SAP supports both the new and the old logic to estimate cardinality. You as a customer can test and choose the right one for your particular workload.
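
The exact trace flag SAP sets is documented in SAP Note 1961467 referenced above. For reference, Microsoft documents trace flag 9481 as forcing the pre-2014 (legacy) cardinality estimator, and the QUERYTRACEON 2312 query hint as a way to test the new estimator for an individual statement; the table used in this sketch is purely illustrative:

    -- Revert the whole instance to the legacy cardinality estimator
    -- (the SAP tools set such a flag as a SQL Server startup parameter, e.g. -T9481)
    DBCC TRACEON (9481, -1);

    -- Test the new SQL Server 2014 estimator for one statement only
    SELECT BELNR, BUDAT
    FROM [dbo].[BKPF]
    WHERE MANDT = '100' AND BUKRS = '1000'
    OPTION (QUERYTRACEON 2312);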

Lock Priority

A new piece of functionality that is intended to improve availability of the SQL Server instance during operational tasks is called 'lock priority'. Lock priority can be used when issuing online DDL statements like 'create index' against tables. Despite being mostly 'online', these tasks could lead to queuing of modifications against a table, especially in the way SAP integrates with SQL Server. In order to resolve these issues, we introduced the construct of lock priorities, which should help to avoid such queuing of modifications when performing online DDL operations against tables. More information about this topic can be found here: https://blogs.msdn.com/b/saponsqlserver/archive/2014/01/17/new-functionality-in-sql-server-2014-part-3-low-priority-wait.aspx
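
Purely as an illustration (in an SAP system the DDL is normally generated through the SAP data dictionary and database tools), an online index rebuild with the low-priority wait option looks roughly like this; the table and index names are placeholders:

    -- Rebuild an index online, waiting at low priority behind blocking transactions;
    -- if the lock cannot be obtained within 5 minutes, the rebuild aborts itself
    ALTER INDEX [VBAP~0] ON [dbo].[VBAP] REBUILD
    WITH (ONLINE = ON (WAIT_AT_LOW_PRIORITY
          (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = SELF)));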

Azure Integration

Besides being able to direct SQL Server 2014 backups directly against Azure Blob storage, SQL Server 2014 also offers the possibility to place SQL Server data and log files directly onto Azure Blob storage without the definition of VHDs. This method of placing SQL Server data files directly on Azure Blobs is applicable when running SQL Server in an Azure VM (Infrastructure as a Service) and when you want to overcome potential limitations of the Azure VM SKU you chose (limits on the number of VHDs or IOPS). Within SAP scenarios, the functionality is certainly not meant to run the SQL Server instance in your own on-premises datacenter with the data files of the databases deployed on Azure Blob storage in an Azure datacenter. You can read more about this functionality here: https://msdn.microsoft.com/en-us/library/dn385720.aspx
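
A minimal sketch of this configuration, with placeholder storage account, container and Shared Access Signature, could look like this:

    -- The credential is named after the container URL and holds a Shared Access Signature
    CREATE CREDENTIAL [https://<storage account>.blob.core.windows.net/<container>]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = '<SAS token>';

    -- Create a database whose data and log files live directly on Azure Blob storage
    CREATE DATABASE TestOnBlob
    ON (NAME = TestOnBlob_data,
        FILENAME = 'https://<storage account>.blob.core.windows.net/<container>/TestOnBlob_data.mdf')
    LOG ON (NAME = TestOnBlob_log,
        FILENAME = 'https://<storage account>.blob.core.windows.net/<container>/TestOnBlob_log.ldf');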

AlwaysOn was changed in a way that you can run a secondary replica of an Availability Group in Azure. This functionality makes it easier to create a DR site for SAP landscapes, in this case especially for the database part. You can imagine a setup where you configure a primary replica plus a secondary replica onsite. These two replicas would synchronize the data in a synchronous manner. What is possible now is to create a third replica in Azure and add it to the AlwaysOn Availability Group of the two replicas which run on-premises. Needless to say, in most cases the data synchronization between the on-premises replicas and the replica in Azure needs to be asynchronous. More on this topic can be read here: https://msdn.microsoft.com/en-us/library/dn463980.aspx
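
Assuming an existing Availability Group and a prepared SQL Server node running in an Azure VM, adding the asynchronous DR replica is a single statement of roughly this shape (all names below are placeholders):

    -- Add an asynchronous, manual-failover replica hosted in Azure to an existing AG
    ALTER AVAILABILITY GROUP [SAP_PRD_AG]
    ADD REPLICA ON 'AZURE-DR-NODE'
    WITH (ENDPOINT_URL = 'TCP://azure-dr-node.contoso.com:5022',
          AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
          FAILOVER_MODE = MANUAL);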

For a whitepaper that describes how you could use Azure as a DR site for an SAP landscape, please check out this document: https://go.microsoft.com/fwlink/?LinkID=521971

In-memory optimized versus in-memory resident

From a pure engineering point of view we distinguish two different cases and technologies when talking about in-memory. Both cases can be combined or can be used separately. The cases we look into are:

  • In-memory optimized
  • In-memory resident

In-memory optimized functionality: data structures or algorithms in the database which are optimized to leverage large memory areas where data can be processed (searched/filtered/scanned/aggregated) far faster than with the traditional data structures and algorithms DBMS vendors used for decades. A classic example is column-oriented structures which reside in large memory areas organized in very large segments. These segments, combined with algorithms optimized to search through such segments, provide extremely fast search capabilities through the data, far faster than the traditional data structures, which were pretty much the same for the memory representation and the storage representation of the data. In contrast to the classic data representation, which was the same for data in memory and on disk storage, in-memory optimized data structures represent the data differently in memory than the data is stored on disk (if the data is stored on disk at all). A classic example is SQL Server's in-memory optimized columnstore. In that case the columnstore data is stored on disk in the usual 8K page format. However, when reading the data from storage, the content of the pages is transformed into large segments that are placed contiguously in memory. This allows new algorithms to scan/search/filter/aggregate that data extremely fast. Especially around columnstore, the largest acceleration effect is provided by in-memory optimized data structures and optimized algorithms scanning these data structures.
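
You can look at these segments for an existing columnstore index through the catalog view sys.column_store_segments; the BW fact table name in this small sketch is purely illustrative:

    -- Show row counts and on-disk size of the columnstore segments of one table
    SELECT cs.column_id, cs.segment_id, cs.row_count, cs.on_disk_size
    FROM sys.column_store_segments AS cs
    JOIN sys.partitions AS p
      ON cs.hobt_id = p.hobt_id
    WHERE p.object_id = OBJECT_ID('dbo.[/BIC/FZSALES]');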

In-memory resident is a capability where the data needs to be kept completely in memory, without the possibility of keeping parts of it only on disk and reading them on demand. In order to fulfill the ACID properties (Atomicity, Consistency, Isolation, Durability) for databases, the modifications to data as it is stored and resident in memory need to be logged in a transaction log as before. There usually also is a requirement to derive snapshots or checkpoints of the in-memory resident data to accelerate crash recovery of a DBMS instance. The obvious advantage is that there is no need to read data from storage. However, that does not translate into not requiring fast storage anymore. Since modifications need to be logged and checkpoints need to be generated, fast premium storage is still required. Keeping the data just in memory in the same block/page structure as it existed before on disk storage provides only little advantage. Data that is kept in-memory resident usually needs to be represented with in-memory optimized data structures and optimized algorithms to get a real advantage out of keeping the data in-memory resident. This means the two capabilities of in-memory optimized and in-memory resident get combined. A classic example in SQL Server 2014 is In-Memory OLTP. A major differentiation between databases that offer in-memory resident functionality is the object granularity at which the in-memory resident functionality can be applied. Especially in the SAP space, our experience over the last two decades is that only a subset of the data stored in an SAP ERP or BW system is really used (think about keeping years of data in the SAP BW PSA area). Therefore functionality that requires the data to be in-memory resident should ideally be applicable at the level of a database table and should not require keeping the whole database in-memory resident. Otherwise the memory requirements forced by the data volumes will become extremely high and with that will limit the usability of mega-scale cloud providers or drive infrastructure costs up. Consequently, SQL Server 2014 In-Memory OLTP can be applied to specific tables of a large database.

What about SQL Server 2014 In-Memory OLTP functionality with SAP

In-Memory OLTP was introduced with SQL Server 2014. This functionality combines in-memory optimized data structures and algorithms with keeping the data in-memory resident. In-Memory OLTP was developed to optimize scenarios in the OLTP space with extremely high data modification rates. The functionality as shipped in SQL Server 2014 does not target analytics workloads at all. More details in regards to In-Memory OLTP can be found here: https://msdn.microsoft.com/en-us/library/dn133186.aspx

or here: https://blogs.msdn.com/b/saponsqlserver/archive/2014/02/09/new-functionality-in-sql-server-2014-part5-in-memory-oltp.aspx
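
Purely for completeness, and as a sketch of what the feature looks like outside of SAP NetWeaver (database, filegroup, table and column names are placeholders), a durable memory-optimized table in SQL Server 2014 is declared like this:

    -- In-Memory OLTP first needs a memory-optimized filegroup in the database
    ALTER DATABASE [TestDB] ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
    ALTER DATABASE [TestDB] ADD FILE (NAME = 'imoltp_dir', FILENAME = 'R:\data\imoltp_dir')
        TO FILEGROUP imoltp_fg;
    USE [TestDB];

    -- Durable memory-optimized table: data is in-memory resident, changes are still logged
    CREATE TABLE dbo.SessionState
    (
        SessionId  NVARCHAR(32) COLLATE Latin1_General_100_BIN2 NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        LastUpdate DATETIME2 NOT NULL,
        Payload    VARBINARY(4000)
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);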

One of the obstacles to SAP NetWeaver using SQL Server In-Memory OLTP is the fact that the first release of In-Memory OLTP mainly uses snapshot transaction isolation level (https://blogs.technet.com/b/dataplatforminsider/archive/2013/10/01/in-memory-oltp-programmability-concurrency-and-transaction-isolation-for-memory-optimized-tables.aspx). The upside of this transaction isolation level is that it avoids all kinds of blocking, even between concurrent modifiers. However, that also requires conflict detection between two transactions which read a row value and then both try to change it in parallel, something which usually is blocked with the Committed Read or Read Committed Snapshot isolation levels SAP NetWeaver uses when working against the different RDBMS. The conflict resolution by In-Memory OLTP to avoid issues with two modifiers is to allow only one open transaction to modify a specific row. If the same row is changed by a second transaction before the first transaction has committed its change, the second transaction is completely rolled back.

With certain SAP business transactions running in parallel batches with overlapping data, we often encounter blocking locks between modifications made by different SAP work processes. This means that a solution which would kill the transactions of concurrent modifiers does not present a viable way to handle such situations. Therefore the usage of In-Memory OLTP by SAP NetWeaver is not possible at this point in time.

Minimum requirements and more details

We don’t want to list all the minimum requirements from the SAP side in this blog. Instead we want to point to SAP Note 1966681 – Release planning for Microsoft SQL Server 2014

plus the SAP Notes listed as related notes in the master note above. Please be aware that the Basis Support Packages listed as minimum requirements for the different NetWeaver releases need to be applied before moving to SQL Server 2014.

Comments

  • Anonymous
    May 06, 2015
    It is nice to work with SQL 2014. It provides SAP support.

  • Anonymous
    November 11, 2015
    Referring to SAP Notes 1966701 & 1966681, I'd like to get some confirmation on an SAP upgrade that we will be undertaking. Today's SAP Environment:

  • O/S = Windows Server 2008 R2 - SQL Server = 2008 R2 (Enterprise)
  • SAP = NetWeaver 7.11
  • SAP Modules = CRM, ECC, PI, BW, Solution Manager

    Proposed SAP Upgrade Environment:

  • O/S = Windows Server 2012
  • SQL Server = 2014 (Enterprise)
  • SAP = NetWeaver 7.4
  • SAP Modules = CRM, ECC, PI, BW, Solution Manager

    Proposed Upgrade Path:

  • Build a parallel landscape (all new servers)
  • Install O/S at 2012 - Install SQL Server at 2014
  • Application will be installed at NW 7.4
  • Database will be restored from existing 2008 R2 environment (DB compatibility mode will stay at 2008 R2)
  • Perform a System Copy (becomes new SID)
  • Immediately apply SAP Support Pack Stacks
  • Database compatibility mode will be set to 2014
  • Apply SAP Enhancement Packs

    Will this work? Do you see any issues with this strategy? Any advice would be greatly appreciated!
  • Anonymous
    February 29, 2016
    Is the download for BPC 10.1 on SQL 2014 available? The product availability matrix is showing SQL 2014 as supported for BPC 10.1, however I just pulled down the latest installer for BPC 10.1. It did NOT include the 2014 version of the SQL_AS_ADOMD.msi installer. I also received several error messages at install:

    OSoftSetupHelper.log:
    SQL Server Version... Required: or higher  Currently: 12.0.2000.8  Status: Incorrect
    OLAP Server Version... Required: or higher  Currently: 2014.120.2000.8  Status: Incorrect