Backing up a VLDB to Azure Blob Storage
Reviewed by: Pat Schaefer, Rajesh Setlem, Xiaochen Wu, Murshed Zaman
All SQL Server versions starting from SQL Server 2012 SP1 CU2 support Backup to URL, which allows storing SQL Server backups in Azure Blob Storage. In SQL Server 2016, several improvements to Backup to URL were made, including the ability to use block blobs in addition to page blobs, and the ability to create striped backups (if using block blobs). Prior to SQL Server 2016, the maximum backup size was limited to the maximum size of a single page blob, which is 1 TB.
With striped backups to URL in SQL Server 2016, the maximum backup size can be much larger. Each block blob can grow up to 195 GB; with 64 backup devices, which is the maximum that SQL Server supports, that allows backup sizes of 195 GB * 64 = 12.19 TB.
(As an aside, the latest version of the Blob Storage REST API allows block blob sizes up to 4.75 TB, as opposed to 195 GB in the previous version of the API. However, SQL Server does not use the latest API yet.)
In a recent customer engagement, we had to back up a 4.5 TB SQL Server 2016 database to Azure Blob Storage. Backup compression was enabled, and even with a modest compression ratio of 30%, the 20 stripes we used should have been more than enough to stay within the limit of 195 GB per blob (4.5 TB compressed to roughly 3.15 TB works out to about 158 GB per stripe).
Unexpectedly, our initial backup attempt failed. In the SQL Server error log, the following error was logged:
Write to backup block blob device https://storageaccount.blob.core.windows.net/backup/DB_part14.bak failed. Device has reached its limit of allowed blocks.
When we looked at the blob sizes in the backup storage container (any storage explorer tool can be used, e.g. Azure Storage Explorer), the blob referenced in the error message was slightly over 48 GB in size, which is about a quarter of the 195 GB maximum blob size that Backup to URL can create.
To understand what was going on, it was helpful to re-read the “About Block Blobs” section of the documentation. To quote the relevant part: “Each block can be a different size, up to a maximum of 100 MB (4 MB for requests using REST versions before 2016-05-31 [which is what SQL Server is using]), and a block blob can include up to 50,000 blocks.”
If we take the error message literally, and there is no reason why we shouldn’t, we must conclude that the referenced blob has used all 50,000 blocks. That would mean that the size of each block is 1 MB (~48 GB / 50000), not the maximum of 4 MB that SQL Server could have used with the version of REST API it currently supports.
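As a quick sanity check of that arithmetic (the numbers come from the observed blob size and the documented 50,000-block limit):

-- ~48 GB spread over the 50,000-block limit works out to roughly 1 MB per block
SELECT (48.0 * 1024 * 1024 * 1024) / 50000 / (1024 * 1024) AS approx_block_size_mb;
-- returns ~0.98, i.e. about 1 MB per block, a quarter of the 4 MB maximum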
How can we make SQL Server use larger block sizes, specifically 4 MB blocks? Fortunately, this is as simple as using the MAXTRANSFERSIZE parameter in the BACKUP DATABASE statement. For 4 MB blocks, we used the following statement:
BACKUP DATABASE … TO
URL = 'https://storageaccount.blob.core.windows.net/backup/DB_part01.bak',
…
URL = 'https://storageaccount.blob.core.windows.net/backup/DB_part20.bak'
WITH COMPRESSION, MAXTRANSFERSIZE = 4194304, BLOCKSIZE = 65536, CHECKSUM, FORMAT, STATS = 5;
This time, the backup completed successfully, with each of the 20 blobs/stripes slightly less than 100 GB in size. We could have used fewer stripes to get closer to 195 GB blobs, but since the compression ratio was unknown in advance, we chose a safe number.
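As an aside, stripe sizes can also be checked from the msdb backup history, without a storage explorer. A sketch (compressed_backup_size is the total for the backup set, so each stripe is roughly that value divided by the number of devices):

-- List recent backups with their stripes and total compressed size in GB
SELECT bs.database_name,
       bs.backup_finish_date,
       bmf.physical_device_name,
       bs.compressed_backup_size / 1024.0 / 1024 / 1024 AS total_compressed_gb
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
ORDER BY bs.backup_finish_date DESC;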
Another backup parameter that helps squeeze more data within the limited number of blob blocks is BLOCKSIZE. With the maximum BLOCKSIZE of 65536, the likelihood that some blob blocks will be less than 4 MB is reduced.
Conclusion: If using Backup to URL to create striped backups of large databases (over 48 GB per stripe), specify MAXTRANSFERSIZE = 4194304 and BLOCKSIZE = 65536 in the BACKUP statement.
Comments
- Anonymous
April 26, 2017
The comment has been removed
- Anonymous
April 26, 2017
Hi Michael, this happens because you are attempting to back up to a page blob (given that you use WITH CREDENTIAL). Striped backups are only supported for block blobs. To use block blobs, you need to create a Shared Access Signature (SAS), and create a credential for the SAS. There are examples in https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-backup-to-url.
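For illustration, creating such a credential might look like the sketch below; the storage account, container, and SAS token are placeholders, and the SAS token generated in the Azure Portal must be pasted without the leading "?":

-- The credential name must match the container URL used in BACKUP ... TO URL
CREATE CREDENTIAL [https://storageaccount.blob.core.windows.net/backup]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2016-05-31&sr=c&sig=...';  -- container-level SAS token, no leading '?'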
- Anonymous
April 27, 2017
Thank you Dimitri.
- Anonymous
July 21, 2017
Is there any way to add those options to backup jobs executed by the Azure SQLIaaSExtension? Or to just make SQL Server append those options to ANY backups it's doing (like the server-wide "backup compression" option)? Thanks
- Anonymous
July 21, 2017
Sergey, starting with SQL 2016 SP1 CU2, Managed Backup (which is what SQLIaaSExtension uses) already includes MAXTRANSFERSIZE in the BACKUP statement. As you noted, COMPRESSION (and CHECKSUM) can be configured at the instance level. Unfortunately, there is no way to specify BLOCKSIZE, but this is unlikely to make a big difference in terms of the largest possible backup size.
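For reference, those instance-level defaults are ordinary sp_configure options; a minimal sketch:

-- Apply compression and checksums to all backups by default on this instance
EXEC sp_configure 'backup compression default', 1;
EXEC sp_configure 'backup checksum default', 1;
RECONFIGURE;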
- Anonymous
July 24, 2017
Thanks Dimitri, I've modified the SPROCs responsible for backup that SQLIaaSExtension uses, adding both BLOCKSIZE and MAXTRANSFERSIZE, and verified it's all working. It doesn't seem to be overridden by updates to SQLIaaSExtension. I've posted a question on https://www.yammer.com/azureadvisors/#/Threads/show?threadId=927055085 though, to pick the public brain on whether something can override those SPROCs and automatically break my change. Do you know in what circumstances that would occur, if any (except CU/SP installs)?
- Anonymous
July 24, 2017
Modifying system stored procedures is not supported. In this particular case, while it may be unlikely that anything would overwrite your changes, no one can give a guarantee that it won't happen (for example, rebuilding MSDB as a part of some troubleshooting effort will revert this change). Installing SP1 CU2 or later, where MAXTRANSFERSIZE is added, is a much better option.
- Anonymous
July 25, 2017
The comment has been removed
- Anonymous
July 27, 2017
The comment has been removed
- Anonymous
August 01, 2017
Hi Hari999, this error could indicate issues due to an incorrect SAS token. If you are using the Azure Portal to generate the SAS token, make sure to remove the leading "?" while creating the CREDENTIAL in SQL Server. Alternatively, you can use the SSMS Backup UI to create the credential and script out the BACKUP T-SQL command.
- Anonymous
August 11, 2017
There is no 1 TB limit at the container level. You should place all blobs into the same container. That said, as Rajesh mentioned and given the error message, it is likely that the credential you are using is incorrect.
- Anonymous
April 23, 2018
Hi Dimitri, I want to back up a 1.5 TB database. Can this be done using a maintenance plan or jobs in SQL Server 2016? I tried using a maintenance plan, but the backup failed. Error: Executing the query "BACKUP DATABASE [database name] TO URL = N'..." failed with the following error: "Use of WITH CREDENTIAL syntax is not valid for credentials containing a Shared Access Signature. BACKUP DATABASE is terminating abnormally.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. Is there any alternative to automate backups other than the portal? Thanks in advance.
- Anonymous
April 24, 2018
Maintenance plans do not support backup to block blobs, which is required when the size of the backup exceeds 1 TB. For an automated backup solution, you can use SQL Server Managed Backup (https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-managed-backup-to-microsoft-azure), or the latest version of Ola Hallengren's maintenance solution (https://ola.hallengren.com/). If your database compresses well and you enable backup compression, then you may be able to use maintenance plans if you create the old-style credential using the storage account name and key, i.e. using the second option in this documentation topic (https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-backup-to-url?view=sql-server-2017#credential). I do not recommend this though, as backups could start failing in the future if the size of the backup file exceeds 1 TB.
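For illustration, enabling Managed Backup for a single database might look like the sketch below; 'YourDatabase' and the container URL are placeholders, and a SAS credential for that container is assumed to already exist:

-- Enable Managed Backup for one database with a 30-day retention period
USE msdb;
EXEC managed_backup.sp_backup_config_basic
    @enable_backup = 1,
    @database_name = 'YourDatabase',
    @container_url = 'https://storageaccount.blob.core.windows.net/backup',
    @retention_days = 30;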
- Anonymous
April 24, 2018
Hi Dimitri, thank you for the solution. As you said, I tried to enable SQL Server Managed Backup through the portal. When I configure and deploy, the SQLIaaS extension goes to a failed state and the backup does not get configured. Error: "Failed to update sql vm settings. The deployment UpdSqlVmAb-20180425122220 did not succeed. Check the deployment history of the resource group of the vm." When I check the deployment history, the "Write Extensions" operation fails with a ResourceDeploymentFailure: "The resource operation completed with terminal provisioning state 'Failed'." Any idea what might have gone wrong?