Running stateful workloads on Kubernetes doesn’t have to mean losing storage visibility or control. VMware Cloud Native Storage (CNS) bridges vSphere and Kubernetes through CSI integration.
As organizations modernize applications, the shift toward running stateful workloads on Kubernetes is accelerating. Gartner® states that by “2029, more than 95% of global organizations will run containerized applications in production.” VMware Cloud Native Storage (CNS) bridges the gap between traditional vSphere infrastructure and modern Kubernetes applications, allowing IT teams to manage container storage with the same tools and capabilities as virtual machines (VMs).
In this article, we will examine the CNS architecture, the vSphere Container Storage Plug-in components, and best practices for configuration.
Architecture of Cloud Native Storage (CNS)
The CNS framework allows vSphere to treat container volumes as first-class citizens, rather than opaque files hidden inside a VM. It consists of a control plane within vCenter and a driver running inside the Kubernetes cluster. The driver is a CSI (Container Storage Interface) implementation that allows the Kubernetes cluster to provision volumes natively on the underlying vSphere storage through its APIs, as shown in the diagram below.

This allows, for instance, a developer to consume vSphere storage on demand in a fully automated fashion, while giving the storage administrator visibility into, and management of, volumes from the vCenter UI.
It is also important to understand the core Kubernetes storage concepts that sit beneath the CSI driver.
Firstly, we have the Persistent Volume (PV), which is the Kubernetes representation of the actual storage unit (the disk). It exists independently of the Pod lifecycle. Persistent Volumes can be provisioned in two ways:
- Static Provisioning: An administrator manually creates a disk in vSphere and then creates a PV object to map to it.
- Dynamic Provisioning: The PV is created automatically by the CSI driver when a user requests it, removing the need for manual administrator intervention.
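As a minimal sketch of the static path, an administrator can map an existing First Class Disk to a PV object by hand. The `volumeHandle` must be the FCD's volume ID in vSphere; the ID, name, and size below are placeholders for illustration.

```yaml
# Static provisioning: an administrator maps an existing FCD to a PV.
# The volumeHandle is the FCD's volume ID in vSphere (placeholder value below).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-fcd-pv
  annotations:
    pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: csi.vsphere.vmware.com
    volumeHandle: "4ef1c3d2-0000-0000-0000-000000000000"   # placeholder FCD ID
    fsType: ext4
```

With dynamic provisioning, none of this is needed: the driver creates both the FCD and the PV object automatically.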
The second part is the Persistent Volume Claim (PVC), which is the developer’s request for storage. It acts as a “ticket” that claims a specific amount of storage capacity and access modes, for example, “I need 10GB of fast disk.” When a valid PVC is created, Kubernetes looks for a matching PV. If one exists (or can be dynamically created), the PVC binds to it.
A Storage Class is an abstraction layer that allows administrators to define “classes” of service (e.g., Gold, Silver, Bronze) without exposing the underlying infrastructure details to developers.
To summarize the flow: the developer requests storage by creating a Persistent Volume Claim, and the CSI driver fulfills it by provisioning a matching Persistent Volume.
Since CSI is a universal standard in Kubernetes, this integration works across all Kubernetes distributions, as long as the cluster runs on supported vSphere infrastructure.
The CNS Control Plane
Within vCenter, we have the CNS control plane, which manages the lifecycle of container volumes (creation, deletion, snapshots, and health monitoring). It introduces the concept of First Class Disks (FCDs), which are virtual disks that exist independently of a VM.
Behind the scenes, the CSI driver creates each dynamically provisioned Persistent Volume as an FCD.
Storage Policy Based Management (SPBM)
The other part is Storage Policy Based Management (SPBM). This is a mechanism that CNS uses to translate Kubernetes requests into vSphere storage capabilities. When a developer requests “Gold” storage in Kubernetes, CNS uses SPBM to place that volume on a datastore that matches specific IOPS, availability, or encryption policies defined by the vSphere administrator.
vSphere Container Storage Plug-in (CSI)
As mentioned, the interface to the underlying infrastructure is Container Storage Interface (CSI). VMware’s implementation of this is vSphere Container Storage Plug-in, which runs directly on the Kubernetes cluster and communicates with the CNS control plane.
Core Components
The plug-in consists of three primary sub-components that handle different tasks:
- The CSI Controller – this component runs as a Deployment in the cluster. It interfaces with vCenter to handle volume operations that do not require access to the specific worker node, such as creating a volume, expanding it, or deleting it.
- The CSI Node – this runs as a DaemonSet on every worker node. It handles node-specific operations, such as formatting the volume, mounting it to the node, and creating the bind mount that allows the pod to access the data.
- The Metadata Syncer – this pushes Kubernetes metadata (pod names, PVC names, namespace info) back to vCenter. Syncer ensures the vSphere administrator sees exactly which Kubernetes application is consuming a specific storage volume.
Storage Features and Compatibility
The vSphere Container Storage Plug-in supports two main types of volumes:
- Block Volumes (vSphere Disks/FCDs): These are mounted as block devices. They support ReadWriteOnce (RWO) access mode, meaning only one pod can write to the volume at a time. This is ideal for databases like PostgreSQL or MySQL.
- File Volumes (vSAN File Services): These are backed by vSAN file shares. They support ReadWriteMany (RWX) access mode, allowing multiple pods to share the same data simultaneously. This is useful for web server logs or shared configuration files.
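The distinction shows up directly in the access mode of a claim. As a sketch, a ReadWriteMany PVC like the one below would be backed by a vSAN file share rather than a block VMDK; the `file-sc` class name is a hypothetical Storage Class backed by vSAN File Services.

```yaml
# A ReadWriteMany claim; with the vSphere CSI driver this is backed by a
# vSAN file share instead of a block VMDK.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-logs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  storageClassName: file-sc   # hypothetical class backed by vSAN File Services
```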
Supported Datastores
CNS supports various backing storage, including vSAN, VMFS, and NFS.
- vSAN: Offers the deepest integration, including support for file volumes and stretched clusters.
- Remote Datastores: vSphere 8.0 U3 and later support file volumes on remote datastores.
Configuration Limits and Scalability
To maintain stability and performance, the vSphere Container Storage Plug-in enforces several limits that you should be aware of when planning a deployment.
Global Volume Limits
The total number of volumes supported depends heavily on the underlying datastore type.
| Datastore Type | Maximum Volume Limit | Notes |
|---|---|---|
| vSAN, NFS 3, VMFS | 10,000 volumes per vCenter | Applies to total volumes managed by CNS within one vCenter instance |
| vSAN File Shares | 100 shares per vSAN cluster | Specifically for file-based volumes |
| Multi-Access Clients | 100 concurrent clients | Max clients accessing ReadWriteMany (RWX) Persistent Volumes |
Also note that there are limits on the number of Persistent Volumes per node. Because PVs are attached to worker nodes as virtual disks, the number that can be attached is bounded by the SCSI controllers: each PVSCSI controller hosts up to 15 disks, so four PVSCSI controllers allow 60 disks in total, one of which is reserved for the OS disk, leaving a maximum of 59 PVs per node.
Provisioning Storage: Practical Examples
So how can a developer dynamically provision storage using Kubernetes manifests? Firstly, we need a Storage Class defined, which acts as the bridge between Kubernetes requests and vSphere storage policies. It defines the “tier” of storage (e.g., Gold, Silver, Bronze) by referencing a specific SPBM policy name.
The following YAML creates a class named gold-sc that maps directly to the “Gold” storage policy in vCenter. Any volume created with this class will automatically be placed on a datastore compliant with the Gold policy rules (e.g., RAID-1, encryption enabled).
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "Gold"
```
Secondly, a Persistent Volume Claim (PVC) is a request for storage by the developer. When a PVC references a Storage Class, the vSphere CSI plug-in dynamically provisions a backing virtual disk (FCD) or file share.
The following YAML requests a 5Gi volume using the gold-sc class. Because the access mode is ReadWriteOnce, this will be provisioned as a Block Volume (VMDK).
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: persistent-vmdk   # Kubernetes object names must be lowercase DNS labels
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: gold-sc
```
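A pod then consumes the claim by name. The sketch below assumes the claim is named `persistent-vmdk` (Kubernetes object names must be lowercase DNS labels, so an uppercase variant would be rejected by the API server); the pod name and image are illustrative.

```yaml
# A Pod mounting the claim; the volume is attached to whichever node
# schedules the pod, consistent with ReadWriteOnce semantics.
apiVersion: v1
kind: Pod
metadata:
  name: demo-db
spec:
  containers:
    - name: postgres
      image: postgres:16
      volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: persistent-vmdk
```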
So What Happens?
- Dynamic Provisioning: Kubernetes detects the claim and calls the vSphere CSI driver.
- Creation: The driver creates a 5Gi virtual disk on a datastore that satisfies the “Gold” policy.
- Binding: The PV is bound to the claim.
- Cleanup: When the claim is deleted, the underlying storage is deleted automatically (unless the reclaim policy is set to Retain).
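If automatic deletion is not desired, the reclaim behavior is set on the Storage Class. As a sketch, the class below (a hypothetical variant of the Gold class) keeps the backing FCD in vSphere when the claim is deleted:

```yaml
# Same "Gold" policy, but deleting the PVC keeps the backing FCD in vSphere.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold-retain-sc   # hypothetical name
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Retain
parameters:
  storagepolicyname: "Gold"
```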
Summary
This guide detailed the integration of VMware Cloud Native Storage and Kubernetes, bridging infrastructure control with agility. We examined the architecture connecting the Kubernetes CSI driver to vCenter, highlighting:
- Operational Model: How Storage Policy Based Management enables administrative governance while allowing developer self-service via manifests.
- Storage Mapping: Distinguishing between Block (RWO) and File (RWX) volumes for appropriate workload placement.
Integrating CNS turns vSphere into a platform-aware storage engine, eliminating operational silos. By leveraging SPBM and the CSI plug-in, organizations can deliver automated, policy-driven storage to their DevOps teams.
from StarWind Blog https://ift.tt/iCSe9Qn