Scaling without Compromise with vSAN HCI Mesh and VCF Remote Clusters

In the previous blog, we discussed the new features announced as part of vSphere 7U1, vSAN 7U1, and VCF 4.1 that enable VMware to deliver a developer-ready infrastructure. In this blog post, we will cover more features that allow customers to scale their infrastructure without compromising reliability, availability, or security.

So, let’s start with vSphere 7U1.

With the new version of vSphere, you can have 50% more hosts (96 hosts) in a single vSphere cluster and run up to 10,000 VMs. Please note that the 96-host upper limit doesn’t apply to vSAN 7U1; if you are using vSAN clusters, the limit is still 64 hosts. vSphere 7U1 also comes with a new VM hardware version, allowing each VM to have up to 768 vCPUs, compared to the previous limit of 256 vCPUs. Moreover, each VM can now have up to 24 TB of memory. This is great news for all your in-memory database instances (e.g., SAP HANA) that were restricted to 6 TB until this release.
Another interesting feature is EVC (Enhanced vMotion Compatibility) support for graphics. Virtualization admins have long used EVC to mix and match servers in a cluster while still being able to vMotion VMs between hosts of different generations. Now the same capability extends to graphics as well. You can define a new EVC graphics baseline for your clusters using the Direct3D 10.1 or OpenGL 3.3 specifications and still vMotion those graphics-enabled VMs across various hosts.

But let’s not forget about security. VMware has been building security into its products rather than bolting it on after the fact, and this new announcement with AMD adheres to that principle. vSphere 7U1 now supports AMD SEV-ES (Secure Encrypted Virtualization – Encrypted State) to provide defense in depth. AMD SEV-ES allows operators to encrypt each VM running in the cluster with its own encryption key. So, even in the case of a VM escape, a malicious actor trying to read a different VM’s runtime state will only see ciphertext. Please note that you will need AMD EPYC 7xx2 CPUs to use this feature, and your guest OS must support it as well. However, enabling this feature disables vMotion, memory snapshots, hot-add of resources, etc. for those VMs.

Next up, let’s look at some of the features in vSAN 7U1 that help users scale and get a bigger bang for their buck. The first thing we need to talk about is HCI Mesh. VMware HCI Mesh allows operators to use capacity across independent vSAN clusters. Say you have three different vSAN clusters (A, B, and C) with different storage media: you can now mount the vSAN datastore from cluster A as a remote datastore in clusters B and C. VMs running on clusters B and C continue running on those clusters, but can now use the vSAN datastore from cluster A as their storage backend. Operators can thus leverage excess storage capacity from one cluster while running VMs on another. This also enables compute-only vMotion operations if needed. And it still uses vSAN’s native protocol under the covers for efficiency and simplicity.
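To make the mount relationship concrete, here is a toy Python model of HCI Mesh. The `VsanCluster` class, its methods, and the cluster sizes are all invented for illustration; this is not the vSAN API, just a sketch of the "compute here, storage there" idea described above.

```python
# Toy model of HCI Mesh mount relationships -- purely illustrative,
# not the vSAN API. Each vSAN cluster exports one datastore; other
# clusters may mount it and place VM storage there while the VM's
# compute stays local.
class VsanCluster:
    def __init__(self, name, capacity_tib):
        self.name = name
        self.capacity_tib = capacity_tib   # local vSAN datastore capacity
        self.mounted = {}                  # remote datastores visible here

    def mount_remote(self, other):
        """Mount another cluster's vSAN datastore as a remote datastore."""
        self.mounted[other.name] = other

    def place_vm(self, vm_name, datastore_name):
        """Run the VM here, backed by a local or mounted remote datastore."""
        if datastore_name == self.name or datastore_name in self.mounted:
            return f"{vm_name}: compute on {self.name}, storage on {datastore_name}"
        raise ValueError(f"datastore {datastore_name} is not visible to {self.name}")

# Cluster A has spare capacity; cluster B is storage-constrained.
a = VsanCluster("A", capacity_tib=200)
b = VsanCluster("B", capacity_tib=40)
b.mount_remote(a)                 # B mounts A's datastore as a remote datastore
print(b.place_vm("db01", "A"))    # db01 runs on B's hosts, stored on A
```

The key design point the sketch captures is asymmetry: mounting A on B makes A's capacity visible to B's VMs without moving any compute off cluster B.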

Another feature introduced in vSAN 7U1 is the ability to enable compression alone (versus both deduplication and compression) on your cluster. You will still need an all-flash cluster to use this feature. It can improve I/O throughput for demanding workloads that were unable to take advantage of deduplication anyway, because it enables vSAN to destage data in parallel per capacity drive.

Another feature that should help people get more usable storage is the reduction of the flat 30% slack-space requirement per vSAN cluster. You now have smarter reserves: an operations reserve (for operational tasks) and a host rebuild reserve (for failure scenarios). The combined requirement shrinks as the number of nodes in your cluster increases (18% for a 12-node cluster, 14% for a 24-node cluster, and 12% for a 48-node cluster). These reserves can be enabled or disabled from the vCenter interface and show up in the capacity management UI for easier tracking.

The last vSAN feature is the ability to use a shared witness VM for up to 64 2-node clusters. This drastically reduces operational and technical overhead, as you go from one witness VM per 2-node vSAN cluster to one witness VM per 64 2-node clusters.
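The impact of the tiered reserves described above is easy to see with a back-of-the-envelope calculation. The percentages are the figures quoted in this post; the function itself is illustrative, not a VMware tool.

```python
# Illustrative only: estimate usable vSAN capacity after the combined
# operations + host-rebuild reserves, using the percentages quoted above.
RESERVE_FRACTION = {12: 0.18, 24: 0.14, 48: 0.12}  # nodes -> total reserve

def usable_capacity_tib(raw_tib, nodes):
    """Usable capacity after subtracting the reserve for this cluster size."""
    return raw_tib * (1 - RESERVE_FRACTION[nodes])

# Compare against the old flat 30% slack-space rule for 100 TiB raw:
old_rule = 100 * (1 - 0.30)               # 70 TiB usable
for nodes in (12, 24, 48):
    print(nodes, usable_capacity_tib(100, nodes), "vs old", old_rule)
```

For a 48-node cluster, that is roughly 18 TiB of extra usable capacity per 100 TiB raw compared to the old flat rule.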

Now that we know the enhancements in vSphere and vSAN, let’s talk about VCF 4.1. In addition to all the above features, VCF 4.1 introduces the concept of remote clusters. A VCF remote cluster can be an independent workload domain with a dedicated vCenter instance, or multiple remote clusters can share a vCenter instance. This allows users to extend the same operational and lifecycle-management simplicity to all their edge locations. You no longer need to deploy a management domain at each site or use the VCF consolidated architecture with federation. VCF remote clusters need at least 3 nodes per cluster, with a maximum of 4 nodes per cluster. In addition to running at least VCF release 3.9, each remote cluster also needs dual redundant links with at least 10 Mbps of bandwidth and no more than 50 ms of latency. Now you can choose between a 2-node vSAN cluster or a 3-node VCF remote cluster for your edge locations.
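The remote-cluster prerequisites above amount to a simple checklist, sketched below as a pre-flight check. The function name and its argument names are invented; the thresholds mirror the requirements stated in this post, and this is not an actual VMware validation tool.

```python
# Illustrative pre-check for the VCF remote cluster requirements listed
# above (node count, link redundancy, bandwidth, latency). Hypothetical
# helper; thresholds come from the blog text, not from a VMware API.
def meets_remote_cluster_requirements(nodes, link_count, bandwidth_mbps, latency_ms):
    return (3 <= nodes <= 4           # 3 to 4 nodes per remote cluster
            and link_count >= 2       # dual redundant links
            and bandwidth_mbps >= 10  # at least 10 Mbps of bandwidth
            and latency_ms <= 50)     # no more than 50 ms of latency

# A 3-node edge site with two 100 Mbps links at 20 ms qualifies:
print(meets_remote_cluster_requirements(3, 2, 100, 20))
```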

Hopefully, this blog gave you a good overview of the new scaling features introduced today. Check out my previous blog on Delivering Developer Ready Infrastructure, or the next blog on how VMware is further Simplifying Operations for IT operators.
