Storage Spaces Deployment Guide for Automation Scripts

Summary

Please provide questions/feedback on this guide and scripts to spaces_deploy@microsoft.com.

This guide will instruct you on how to create and test a new Storage Spaces deployment using a set of automated PowerShell scripts. The tools provided here are very generic and can be used with many different hardware configurations to create virtual disks or other storage for your enterprise workloads. While the tools are available to the general public, we strongly recommend working with an approved OEM partner when planning and configuring Storage Spaces because they can provide a higher level of IT support.

Before using these scripts, be certain to review the Storage Spaces Planning and Design guide to ensure that you acquire hardware that will meet your needs.

Workflow - brief overview

  1. Select and install physical hardware
  2. Validate hardware performance and health using ValidateStorageHardware.ps1
  3. Create storage pools and virtual disks using ProvisionStorageConfiguration.ps1
  4. Create a Failover Cluster with all of the storage nodes
  5. Add all of the shared drives to Cluster Shared Volumes
  6. (optional) Test performance of the cluster using TestStorageConfigurationPerf.ps1

Workflow - detailed

Select and Install Physical Hardware

Hardware selection

When choosing an initial hardware configuration, please refer to the Storage Spaces design and planning document. In most cases, the number and type of physical drives you choose will determine the maximum IOPS and throughput you will see from the system. This will have a direct impact on workload performance.

In general we recommend the following guidelines for Storage Spaces deployments:

  1. All hardware should be uniform. Select only 1 model of JBOD, 1 model of SAS HBA, and try to limit the number of drive models to 2.
  2. Ensure that all hardware (drives, servers, NICs, and controllers) has the latest available firmware version.
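
To spot-check drive firmware levels from PowerShell, something like the following can help (a minimal sketch; the firmware versions to expect come from your OEM):

PS>> Get-PhysicalDisk | Select-Object FriendlyName, MediaType, FirmwareVersion | Sort-Object FriendlyName | Format-Table -AutoSize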

Again, we highly recommend working with an OEM when building a Storage Spaces solution because they can provide more support and expertise.

Physical Drive Placement

  1. Each JBOD should have the same number of SSDs and HDDs (a quick check is sketched after this list)
  2. SSDs and HDDs should be placed within each enclosure according to the manufacturer's recommendations (performance, thermal ratings, etc.)
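
One hedged way to tally the drive mix per enclosure from PowerShell (a sketch; it assumes the EnclosureNumber property is populated for your JBOD-attached disks):

# Count disks per enclosure, split by media type
Get-PhysicalDisk |
    Group-Object EnclosureNumber, MediaType |
    Sort-Object Name |
    Format-Table Name, Count -AutoSize
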
SAS Fabric configuration

For best performance, each SOFS node should have at least 2 direct SAS connections to each JBOD. Be certain to choose SAS HBAs with enough ports to support the number of enclosures that you will need.
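
As a quick sanity check after cabling, each node should be able to see every enclosure. A minimal sketch (run on each SOFS node):

PS>> Get-StorageEnclosure | Select-Object FriendlyName, SerialNumber, NumberOfSlots, HealthStatus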

Finishing up

At the end of this step, you will have properly configured the physical hardware and infrastructure needed for a Storage Spaces deployment. You are now ready to begin testing the storage hardware to ensure that it will be performant enough to meet your needs.

For more information on how to size and configure hardware for Storage Spaces, see the Storage Spaces Planning and Design guide.

ValidateStorageHardware.ps1

https://gallery.technet.microsoft.com/scriptcenter/Storage-Spaces-Physical-7ca9f304

Once all the physical hardware has been configured, but before creating storage pools or initializing the disks in any other way, run the ValidateStorageHardware.ps1 script to check the health of all the drives and do some basic performance testing on the physical hardware.

For a 240-disk deployment, you can expect this script to take about 8 hours.

Workflow

Prior to running the script, the node must have the following tools placed in the same directory as ValidateStorageHardware.ps1:

  • diskspd.exe
  • dskcache.exe

For example, you can put ValidateStorageHardware.ps1, diskspd.exe, and dskcache.exe in the C:\Users\User1\Desktop folder.
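
As an optional pre-check, you can confirm the required files are present before launching the script (a sketch; the folder path is just the example above):

# Confirm the script and its companion tools sit in the same folder
$toolDir = "C:\Users\User1\Desktop"
'ValidateStorageHardware.ps1', 'diskspd.exe', 'dskcache.exe' | ForEach-Object {
    if (-not (Test-Path (Join-Path $toolDir $_))) { Write-Warning "$_ not found in $toolDir" }
}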

To run the script, use either of the following:

PS>> .\ValidateStorageHardware.ps1

PS>> .\ValidateStorageHardware.ps1 [-OutputPath <string>]

This will run the hardware testing tool and identify any drives that might be unhealthy or need replacement. Read through the output to make sure that all (or most) of the physical disks have been given the PASS rating. If there are a few outliers, then you may want to consider replacing the FAIL or WARN drives. If many physical disks are rated as FAIL or WARN due to IO latency, you might want to consider if the drive models you have chosen will meet your needs. If you believe that the drives will still be able to meet your workload requirements, you can safely continue. Talk to your OEM for guidance if you are unsure of how to proceed.
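
Independent of the script's report, a quick PowerShell check can highlight drives that the storage subsystem already considers unhealthy (a complementary sketch, not a substitute for the script's analysis):

PS>> Get-PhysicalDisk | Where-Object { $_.HealthStatus -ne 'Healthy' } | Select-Object FriendlyName, SerialNumber, HealthStatus, OperationalStatus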

Notes

The script has code in place to support firmware blacklists, but the lists are all currently empty. If you discover Spaces-incompatible firmware, please contact spaces_deploy@microsoft.com to update the script.

Finishing up

Now that you’ve finished this step, you’ve validated that the drives are performant enough to meet your needs and you’re ready to begin provisioning Storage Spaces.

For more information, see Storage Spaces – Designing for Performance and Determining disk health using Windows PowerShell.

ProvisionStorageConfiguration.ps1

https://gallery.technet.microsoft.com/scriptcenter/Storage-Spaces-Automated-3caf249a

After hardware validation, you can begin creating storage pools and virtual disks from the physical disks. Run the ProvisionStorageConfiguration.ps1 script with one of the following:

PS>> .\ProvisionStorageConfiguration.ps1

PS>> .\ProvisionStorageConfiguration.ps1 -Automated

PS>> .\ProvisionStorageConfiguration.ps1 -PhysicalDisks <physical disk objects>

The first option is recommended and will add all of the SAS disks to the storage pool. If you want to use a different set of disks (i.e., you want to exclude some disks from your configuration because they will not be clusterable), pass in a different set of disks via the -PhysicalDisks argument.
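
For example, a sketch of selecting a subset yourself and passing it in (this assumes the script accepts physical disk objects from Get-PhysicalDisk, as described above):

# Select only poolable SAS disks and hand them to the provisioning script
$disks = Get-PhysicalDisk -CanPool $true | Where-Object { $_.BusType -eq 'SAS' }
.\ProvisionStorageConfiguration.ps1 -PhysicalDisks $disks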

The -Automated flag will create storage pools without any user input. The default configuration uses a physical disk redundancy of 2 (the pool can tolerate two failed disks) and reserves 30% of the HDD tier capacity for backup workloads.

You can expect this script to take about 30 minutes to run for a 240-disk deployment.

During execution (if -Automated is not used), the script will ask for 2 pieces of information from the user:

  • Desired physical disk redundancy
  • % capacity of the HDD storage to reserve for backup

The fraction of storage allocated for backup will be used to create parity storage spaces, while the rest will be allocated as mirrored spaces.
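
For reference, the kind of provisioning the script automates looks roughly like the following when done by hand (an illustrative sketch only; the pool name, tier names, and sizes are assumptions, not the script's actual defaults):

# Create a pool from all poolable disks, define SSD/HDD tiers, then carve out
# a tiered three-way mirror space (2-disk redundancy) - illustrative values only
$disks = Get-PhysicalDisk -CanPool $true
$subSystem = (Get-StorageSubSystem | Select-Object -First 1).FriendlyName   # assumes a single local subsystem
New-StoragePool -FriendlyName "Pool01" -StorageSubSystemFriendlyName $subSystem -PhysicalDisks $disks
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "Mirror01" `
    -ResiliencySettingName Mirror -NumberOfDataCopies 3 `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 100GB, 2TB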

Choose these values after carefully considering your final workload; the Software-Defined Storage Design Calculator can make this decision easier. After you provide the numbers, the script will present the calculated deployment and give you the option to cancel the operation so you can choose different parameters if you don't think the proposed configuration will work for you.

Finishing up

When the script has finished running, all of the available physical drives passed in will have been added to storage pools, and each pool will have been carved up into several virtual disks.

Cluster Storage Nodes

The next step is to create a cluster from all the storage nodes that have a direct connection to the physical disks. This can be done using PowerShell or the Failover Cluster Manager UI.

Powershell

PS>> New-Cluster -Name <string> -Node <StringCollection>

PS>> Test-Cluster
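
For example, with hypothetical node and cluster names:

PS>> New-Cluster -Name "SSCluster01" -Node "StorNode01","StorNode02","StorNode03"

PS>> Test-Cluster -Cluster "SSCluster01"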

Failover Cluster Manager UI

Launch Failover Cluster Manager and click "Create Cluster". During the wizard, be sure to run all of the cluster validation tests.

Finishing up

At the end of this step, you will have created a cluster to ensure greater availability for the storage.

Add storage to Cluster Shared Volumes

In this step, the virtual disks created by ProvisionStorageConfiguration.ps1 will be added to Cluster Shared Volumes to allow all the nodes to have simultaneous access to the shared storage resources. This step can also be done using PowerShell or Failover Cluster Manager.

Be sure that the cluster quorum disk is not being used as part of the storage pools. If it is, you will need to change the quorum settings. See Steps for changing quorum configuration for more info about changing the quorum settings.
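
To review (and if needed change) the quorum configuration from PowerShell, a minimal sketch (the witness share path is a placeholder):

PS>> Get-ClusterQuorum

PS>> Set-ClusterQuorum -NodeAndFileShareMajority "\\WitnessServer\WitnessShare"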

PowerShell

This is a single command, which can be run from any node:

PS>> Get-ClusterResource *disk* | Add-ClusterSharedVolume
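
To verify the result, you can list the Cluster Shared Volumes afterward:

PS>> Get-ClusterSharedVolume | Select-Object Name, State, OwnerNode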

Failover Cluster Manager UI

Select all the available disks, right click, and choose “Add to Cluster Shared Volumes”.

Finishing up

At the end of this step, you will have added all of the virtual disks to the cluster shared volumes. You are now ready to begin running workloads on the shared storage resources.

TestStorageConfigurationPerf.ps1

https://gallery.technet.microsoft.com/scriptcenter/Storage-Spaces-Performance-e6952a46

The final step in this guide tests the performance of the Storage Spaces deployment and outputs benchmark performance from synthetic IO tests. The benchmarks should be treated as theoretical maximums. The performance benchmarks simulate generalized workloads for the mirrored and parity spaces. The workloads we use are generalized diskspd runs and might not represent the performance you will see from your system under your specific workload. The performance test validation done here is not a substitute for a complete validation under your unique workload.

This takes about 10 hours to complete on a 240-disk system.

Workflow

Before running this script, each storage node must have its own copy of diskspd.exe on its local volume. The path to diskspd must be identical on each node.
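
One way to stage diskspd.exe to the same path on every node is via the administrative share (a sketch; the destination folder mirrors the example path used in the command below and is an assumption):

# Copy diskspd.exe to an identical local path on each cluster node
foreach ($node in (Get-ClusterNode).Name) {
    Copy-Item -Path .\diskspd.exe -Destination "\\$node\C$\Users\User1\Desktop\" -Force
}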

PS>> .\TestStorageConfigurationPerf.ps1 -pathToDiskspd C:\Users\User1\desktop

When the script has completed, output will be visible on the console. Review the output to make sure that the theoretical performance you see is similar to the performance you had planned for.

For more information on Storage Spaces performance, see the Designing for Performance TechNet article.

Finishing up

Congratulations! You have finished deploying and validating a Storage Spaces configuration. You are ready to begin running real workloads with your new environment.