Part II: What is Storage Spaces Direct?

In Part I of this series I discussed enabling the Hyper-V role inside of a VMware Virtual Machine.

Alternatively to vSphere, Hyper-V can also be enabled on a Windows 10 device (Build 10565 or later) with just a few additional steps. This is great for testing out new features of Hyper-V locally.
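On a Windows 10 client, for example, the feature can be enabled from an elevated PowerShell prompt – a quick sketch using the standard client-SKU optional feature name:

```powershell
# Enable the Hyper-V optional feature on a Windows 10 client (run elevated)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# A reboot is required before the hypervisor loads
Restart-Computer
```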

What is Storage Spaces Direct (S2D)?

S2D is Microsoft’s software-defined storage offering that utilizes standard x86 servers with either locally attached drives or external storage enclosures to deploy a highly available Hyper-V (the main use case) environment. S2D can be deployed as either Hyper-Converged (Aggregated) or Converged (Disaggregated). S2D leverages Windows Failover Clustering, Storage Pools (to aggregate the disks together), the updated ReFS 3.1 file system, and the Cluster Shared Volumes (CSV) file system to unify all of the ReFS volumes under a single namespace – C:\ClusterStorage\
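To make that stack concrete: once S2D is enabled on a cluster, a pooled, ReFS-formatted CSV can be carved out in a single step. A minimal sketch (the volume name and size here are illustrative):

```powershell
# Create a mirrored ReFS volume from the S2D storage pool and
# surface it as a Cluster Shared Volume in one step
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" `
    -FileSystem CSVFS_ReFS -Size 500GB

# The volume then appears on every node under C:\ClusterStorage\Volume01
```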

Converged aka – Disaggregated – S2D Deployment Option (Image courtesy of Microsoft Corporation)

The converged offering leverages an external Scale-Out File Server (SOFS) that the S2D-enabled cluster connects to remotely via SMB3. This approach allows the compute infrastructure to scale separately from the storage.

Hyper-Converged – Aggregated – S2D Deployment Option (Image courtesy of Microsoft Corporation)

The hyper-converged offering combines both the compute and storage together via locally attached disks contained within the Hyper-V nodes. By utilizing local disks, the requirement for an external SOFS is removed.

Let’s Get Rolling – Storage Setup

In my S2D evaluation deployment I have 4 vSphere VMs running Windows Server 2016, joined to my domain with proper DNS configuration. I have also added 4 extra thick-provisioned data disks to each of the VMs.
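Once the disks are attached, you can sanity-check from inside a guest that they are visible and eligible for pooling. These are the standard Storage cmdlets; the filters below are just one reasonable way to check:

```powershell
# List the newly attached, still-unformatted data disks inside the guest
Get-Disk | Where-Object PartitionStyle -Eq 'RAW'

# Disks must be unpartitioned and not already pooled for S2D to claim them
Get-PhysicalDisk | Where-Object CanPool -Eq $true |
    Select-Object FriendlyName, MediaType, Size
```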


Then, to enable Hyper-V in the guest VMs for the S2D deployment, you’ll need to run the install command against all 4 of the VMs that will be used.
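A typical way to push the roles out to all 4 nodes at once looks like the following – a sketch only, where the node names are assumptions and the Hyper-V and Failover Clustering feature names are the standard Windows Server ones:

```powershell
# Install Hyper-V and Failover Clustering on each node remotely,
# rebooting as needed (node names are hypothetical)
$nodes = "S2D-Node1","S2D-Node2","S2D-Node3","S2D-Node4"
foreach ($node in $nodes) {
    Install-WindowsFeature -ComputerName $node -Name Hyper-V, Failover-Clustering `
        -IncludeManagementTools -Restart
}
```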

So now we have 4 Hyper-V enabled VMs with locally attached disk and we’re ready to build our Storage Spaces Direct Cluster.
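The cluster build itself is a Part III topic, but for orientation, the validation and creation steps look roughly like this (a hedged sketch – cluster and node names are made up):

```powershell
# Validate the nodes, including the S2D-specific tests
Test-Cluster -Node S2D-Node1,S2D-Node2,S2D-Node3,S2D-Node4 `
    -Include "Storage Spaces Direct", Inventory, Network, "System Configuration"

# Create the cluster without any shared storage – S2D supplies that later
New-Cluster -Name S2D-Cluster -Node S2D-Node1,S2D-Node2,S2D-Node3,S2D-Node4 -NoStorage
```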

Network Configuration

S2D relies very heavily on a fast, low-latency network connection between the nodes within the cluster. It’s highly recommended to utilize two 10GbE network connections, and to make things perform even better you should consider purchasing RDMA-capable network adapters. Just note that RDMA is not available within a VM, so for this example it is not an option – but when deploying in production you’d definitely want to check with Microsoft and your hardware vendor to make sure that the adapters are WS2016 certified.

Normally we’d now use Server Manager to create a network team; however, WS2016 Hyper-V introduces a new type of virtual switch called Switch Embedded Teaming (SET), which allows the same NIC ports to serve both the guest VMs and the parent partition, including RDMA connectivity for the latter. Again, this is how you’d deploy for real, in production.
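As a sketch of what a SET deployment looks like (the switch, adapter, and vNIC names here are assumptions):

```powershell
# Create a SET-enabled virtual switch across two physical NICs
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" `
    -EnableEmbeddedTeaming $true

# Host vNICs can then be added to the parent partition on top of the SET switch
Add-VMNetworkAdapter -SwitchName "SETswitch" -Name "SMB1" -ManagementOS
```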

However, for simplicity in our evaluation I am utilizing two 1GbE network adapters: one for client traffic and the other for cluster traffic. We’ll configure this specifically later.
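If you want the adapters labeled to match their roles before configuring them, that’s a one-liner each (the original adapter names below are assumptions):

```powershell
# Rename the two adapters to reflect their intended traffic roles
Rename-NetAdapter -Name "Ethernet0" -NewName "Client"
Rename-NetAdapter -Name "Ethernet1" -NewName "Cluster"
```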

Next Steps…

In Part III of this series I’ll share with you the steps to configure Hyper-V as well as how to configure S2D itself.

Cheers- /cw
