In our previous article, we took a deep dive into the two storage types, focusing specifically on SMB, NFS, iSCSI, and NVMe-oF protocols. So, check that out before reading on! Now, it is time to take it to the next level. Here, we will be talking about the storage types and protocols in the context of Microsoft’s Hyper-V environment.
A major problem that businesses often face when setting up their Hyper-V environment is choosing the best storage type, because there are several factors to weigh, especially in relation to their business requirements. Here are some of the factors these companies need to consider:
- Ease of deployment/integration – This is often tied to cost. Does the storage option require extra hardware or specialized staff to configure? Some companies may not have those resources to spare.
- Use case – 2-3 VMs for personal use, a general-use file server, or a full-size enterprise IT infrastructure with multiple VMs, apps, and services with mixed storage access patterns? It depends.
- Performance – Do you need to squeeze all the juice from an all-flash array used as primary storage, or are you building read-intensive storage for a backup repository?
- Shared access and high availability – Sometimes you can get away with running production from local storage on a single virtualization host, but for most businesses, downtime is hardly tolerable, and the ability to migrate VMs from one host to another is mandatory.
Hyper-V Storage Options
Businesses have a lot of storage options to choose from, and that is one of the reasons why making a good choice can be challenging. For simplicity's sake, this article covers the options most popular in the mid-market.
File Storage in Hyper-V
Among all the file-level protocols available in Windows Server, SMB is the most popular. It can be used as shared storage for Hyper-V in Windows Server 2019 and 2022: Hyper-V can store VM files on SMB file shares, which is especially useful for small and medium businesses and ROBO scenarios. Here are some of the advantages of using SMB 3.0 in a Hyper-V environment:
- Ease of management and provisioning – Manage file shares instead of logical unit numbers and storage fabric.
- Increased flexibility – Dynamically migrate databases or VMs in a data center.
- Offloaded Data Transfer (ODX) reduces CPU overhead.
- Reduced capital and operating expenditures – SMB is a native Windows Server feature and protocol, so no additional software licensing is needed.
However, SMB 3.0 also has its drawbacks:
- It is not the best fit for enterprise-level businesses because of performance limitations in workload-intensive scenarios.
- Factors such as security vulnerabilities, authentication, and access control add to the complexity of implementing and configuring SMB 3.0.
- Compatibility issues with older hardware and software: older operating systems like Windows 7 or Windows Server 2008 R2 and below offer limited or no support for SMB 3.0.
SMB has many use cases, such as virtual machine storage in Hyper-V, SMB shares as shared storage for VMs in a Failover Cluster, and SMB shares as storage for MS SQL Server. SMB also powers Windows-native SDS – Microsoft Storage Spaces Direct (S2D) – but that's another story.
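As a quick sketch of the first use case: assuming a file server named FS01, a share called VMStore, and a Hyper-V host computer account HV01$ (all names here are hypothetical placeholders), VM storage on an SMB 3.0 share can be set up roughly like this:

```powershell
# On the file server: create a folder and publish it over SMB.
# The Hyper-V host's computer account needs Full Control on both
# the share and the underlying NTFS folder.
New-Item -ItemType Directory -Path C:\VMStore
New-SmbShare -Name VMStore -Path C:\VMStore `
    -FullAccess "DOMAIN\HV01$", "DOMAIN\Domain Admins"

# On the Hyper-V host: create a VM whose configuration and
# virtual disk live on the remote SMB share.
New-VM -Name TestVM -MemoryStartupBytes 2GB `
    -Path \\FS01\VMStore `
    -NewVHDPath \\FS01\VMStore\TestVM\TestVM.vhdx `
    -NewVHDSizeBytes 60GB
```

In production, NTFS permissions and (for remote management) Kerberos constrained delegation also need to be configured; the snippet above only illustrates the basic flow.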
Like SMB, NFS is a file-level protocol supported in Windows Server. The reason it is not as widely used as SMB in Hyper-V environments is that it cannot easily be configured as VM storage: it requires additional tinkering with virtual machine disks, and the resulting solution will not be stable. That is why we don't consider NFS a viable option for use in a Hyper-V environment.
Block Storage in Hyper-V
Unlike the two file-level protocols above, iSCSI is a block-level protocol: it allows users to set up a shared storage network where network drives are accessed remotely over a standard TCP/IP network. To set up iSCSI in a Hyper-V environment, first configure the target with appropriate access control and security settings; then, on each Hyper-V host, configure the initiator to discover and connect to the target using the target's IP address.
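On the host side, those steps can be sketched with the built-in Windows iSCSI initiator cmdlets (the portal address and target IQN below are placeholders for your own environment):

```powershell
# Make sure the Microsoft iSCSI Initiator service is running
# and starts automatically after reboots.
Start-Service MSiSCSI
Set-Service MSiSCSI -StartupType Automatic

# Point the initiator at the target portal and list discovered targets.
New-IscsiTargetPortal -TargetPortalAddress 192.168.1.50
Get-IscsiTarget

# Connect persistently so the session survives reboots.
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.example:target1" `
    -IsPersistent $true

# Bring the new block device online and format it for VM storage.
Get-Disk | Where-Object BusType -eq iSCSI |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS
```

In a failover cluster, the final formatting step would instead be followed by adding the disk to the cluster and converting it to a Cluster Shared Volume.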
- Widely supported by all operating systems and hypervisors
- iSCSI uses standard Ethernet networks, eliminating the need for specialized and costly Fibre Channel infrastructure. This makes it a cost-effective solution for enterprise-level Hyper-V deployments.
- The iSCSI protocol supports scalability by allowing multiple hosts to access shared storage resources simultaneously, enabling the expansion of storage capacity and the ability to handle growing VM workloads. Note that this requires a failover cluster and Cluster Shared Volumes (CSV); otherwise, unlike SMB, an iSCSI LUN can safely be connected to only a single initiator at a time.
- iSCSI allows for centralized storage management, making it easier to allocate and manage storage resources for Hyper-V environments.
- It also supports features like Multipath I/O for performance optimization.
- One of the biggest drawbacks of the iSCSI protocol is the absence of file locking. That is why a cluster and CSV are needed for multiple clients to safely access the same storage.
- Since iSCSI relies on Ethernet for storage traffic, any network issues, such as latency or packet loss, can impact storage performance and overall system responsiveness.
- iSCSI is not the best fit for workloads that require extremely low latency.
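The Multipath I/O support mentioned above is a separate Windows Server feature that must be enabled explicitly; a minimal sketch of turning it on for iSCSI traffic looks like this (a reboot is typically required after installing the feature):

```powershell
# Install the MPIO feature on the Hyper-V host.
Install-WindowsFeature -Name Multipath-IO

# Claim iSCSI-attached disks for MPIO so that multiple
# network paths to the same LUN are aggregated.
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Use round-robin load balancing across paths (one common choice;
# pick the policy that matches your storage vendor's guidance).
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```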
iSCSI is a universal storage protocol that can be used to access any supported block device over the network. However, one of its primary use cases is virtual machine storage in a Hyper-V environment. StarWind vSAN is an excellent example of a high-performance, cluster-aware iSCSI implementation on Windows.
NVMe over Fabrics (NVMe-oF) is one of the most recent block protocols; it allows high-performance NVMe storage devices to be used over a network in a Hyper-V environment, and it has the potential to replace iSCSI in the future.
The main reason it may replace iSCSI is that it was created as a leaner block protocol for solid-state media (flash or other non-volatile memory): NVMe-oF eliminates the SCSI layer from the protocol stack and delivers lower latency than iSCSI.
The problem with this protocol is that Microsoft has been at the forefront of RDMA mass adoption with its SMB Direct feature. Perhaps that is why they are not planning to invest resources into developing a Windows-native NVMe-oF initiator any time soon.
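You can check whether SMB Direct (RDMA) is actually available on a given host's network adapters with the built-in cmdlets:

```powershell
# List network adapters and whether RDMA is enabled on them.
Get-NetAdapterRdma

# Show which interfaces the SMB client considers RDMA-capable;
# SMB Direct is only used over interfaces reported as capable.
Get-SmbClientNetworkInterface | Select-Object FriendlyName, RdmaCapable
```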
Currently, NVMe-oF use cases are quite limited in Windows-based environments since there is still no “official” support for connecting to remote NVMe storage. Recognizing this gap, we at StarWind have stepped in and released our own NVMe-oF Initiator for Windows, with both free and commercial versions available for use.
Overall, each protocol has its strengths. Windows-native SMB's ease of use and advanced feature set make it a solid choice for VM storage in small Windows-based environments. NFS, even though similar to SMB, is not an option for VM storage and can only be used for a general-use file server.
Even though iSCSI is slightly harder to configure and manage, its universal hardware compatibility and centralized storage management make it a perfect choice for a wide variety of use cases.
Finally, NVMe-oF seems to be the future, but it is only starting to gain traction and has no native Hyper-V support. This can be worked around by implementing StarWind's NVMe-oF Initiator; still, the performance potential it unlocks is rarely required in mid-market applications. Well, that is for now. Who knows what the future holds?
Overall, understanding the “quirks and features” of each protocol is important for choosing the right virtual machine storage for Hyper-V. The choice itself, however, depends on your unique requirements, available resources, and the specific workloads your infrastructure needs to support.
This material has been prepared in collaboration with Asah Syxtus Mbuo, Technical Writer at StarWind.
- Virtual Machine Storage – File vs Block [Part 1]: SMB & NFS vs iSCSI & NVMe-oF
- Where to keep your backups? Storage types explained
from StarWind Blog https://bit.ly/3D0Pkdi