VMware Project Pacific – Guest Clusters


In the previous blog post, we introduced Project Pacific and dove deep into the concepts of the Supervisor Cluster and Native Pods. As promised, in this post we will talk about the third aspect of Project Pacific: Guest Clusters. Guest Clusters enable on-demand, self-service delivery of Kubernetes clusters to developers using the same declarative API they are already familiar with. A Guest Cluster lets developers deploy conformant Kubernetes clusters on vSphere, running as virtual machines. Developers and users have complete control over the number of virtual machines deployed for the control plane and worker nodes, as well as the version of Kubernetes running on the cluster, without any admin intervention.

To understand how a Guest Cluster delivers these benefits, we will discuss the following three things:

  1. Guest Cluster Controller
  2. Cluster API
  3. VM Operator

To understand these three components, let's walk through the workflow that is executed to deploy a guest cluster on vSphere.

(FYI: All the images used in this blog are screenshots from a VMworld session)

(Image: Guest Cluster workflow)

To deploy a guest cluster:

  1. A user deploys a yaml file like the one below (a rough sketch also appears after this list). The yaml file is applied against the Guest Cluster Controller, an easy-to-use, highly opinionated layer that abstracts away all the complexity from the user. It takes in the yaml file and then works with the Guest Cluster Manager to generate Cluster and Machine resources. Like any K8s yaml file, this guest cluster yaml has a spec section, where the user can define things like:
    • Number of VMs and Storage Class for the control plane VMs
    • Number of VMs and Storage Class for the worker nodes in the K8s cluster
    • Kubernetes distribution version to deploy
      (Image: example ManagedCluster yaml)
  2. Once the Guest Cluster Manager receives the yaml file, it forwards the request to the Cluster API controller that runs as part of a management K8s cluster in vSphere (see the Cluster API sketch after this list). At this point, I am still not clear whether the Supervisor Cluster can be used as the management cluster for Cluster API or whether a new management K8s cluster will be deployed for each Namespace in vSphere. When working with Google Cloud Anthos, each management cluster could support up to 10 user clusters; I think we will get better information about config maximums once this functionality is generally available. If you want to learn more about Cluster API, you can read Scott Lowe’s post right here. But to summarize: Cluster API provides the same functionality for your K8s cluster that Kubernetes provides for your containerized applications. Using the Cluster and Machine specs, you can define things like the number of VMs and the version of K8s running in your cluster, and you can non-disruptively upgrade the version of K8s using MachineDeployments (similar to Deployments in K8s). The management cluster runs a reconciliation loop to ensure that the current state (number of VMs, etc.) always matches the desired state defined by the user.
    (Image: Cluster API resources)
  3. Once Cluster API translates the user requirements into VMs that need to be deployed, the call is forwarded to the VM Operator. The VM Operator is where the rubber meets the road: it takes the specifications defined by the user, interacts with vCenter to deploy the VMs, and attaches them to the required networks (a hypothetical example follows this list). In the future, VMware will also allow users to interact directly with the VM Operator to deploy and manage VM resources alongside their K8s clusters.
  4. This is the workflow that is followed every time a user deploys a yaml file for a guest cluster. The Guest Cluster Controller creates a Managed Cluster entity, which is translated into Cluster and Machine level entities by Cluster API and then eventually into actual VMs by the VM Operator.
    (Image: three-layer guest cluster architecture)
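
To make step 1 more concrete, here is a rough sketch of what a guest cluster yaml could look like, assuming a ManagedCluster kind. The API group, kind, and field names are my assumptions for illustration and may not match the schema shown in the VMworld screenshot or the eventual GA product.

```yaml
# Hedged sketch of a guest cluster spec (step 1). The apiVersion, kind, and
# field names are assumptions for illustration only.
apiVersion: vmware.com/v1beta1          # hypothetical API group
kind: ManagedCluster
metadata:
  name: dev-cluster-01
  namespace: dev-team-a                 # a vSphere Namespace on the Supervisor Cluster
spec:
  distribution:
    version: v1.16.4                    # Kubernetes distribution version to deploy
  topology:
    controlPlane:
      count: 3                          # number of control plane VMs
      storageClass: vsan-default        # Storage Class for the control plane VMs
    workers:
      count: 5                          # number of worker node VMs
      storageClass: vsan-default        # Storage Class for the worker nodes
```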
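
For step 2, the Cluster and Machine resources that the guest cluster layer generates are standard Cluster API objects. As an illustration, the worker nodes above could be expressed as a MachineDeployment roughly like the one below; the names and referenced templates are assumptions, and the Cluster API version embedded in Project Pacific was not public at the time of writing. Scaling replicas or bumping the version field is what drives the non-disruptive rolling replacement of worker VMs.

```yaml
# Illustrative Cluster API MachineDeployment for the worker nodes (step 2).
# Resource names and the referenced templates are assumptions for this example.
apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
  name: dev-cluster-01-workers
  namespace: dev-team-a
spec:
  replicas: 5                            # desired number of worker VMs
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: dev-cluster-01
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: dev-cluster-01
    spec:
      version: v1.16.4                   # bumping this rolls workers to a new K8s version
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
          kind: KubeadmConfigTemplate    # bootstrap config for joining the cluster
          name: dev-cluster-01-workers
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
        kind: VSphereMachineTemplate     # provider-specific VM template
        name: dev-cluster-01-workers
```

The management cluster's reconciliation loop compares replicas against the Machines that actually exist and creates or deletes them until the two match.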
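
For step 3, the VM Operator API was not public at the time of writing, so the following is a hypothetical sketch of the kind of per-VM resource it might reconcile. Every field name here is an assumption, used purely to show where Cluster API hands off to vSphere.

```yaml
# Hypothetical per-VM resource reconciled by the VM Operator (step 3).
# apiVersion, kind, and all fields are assumptions for illustration.
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: dev-cluster-01-worker-0
  namespace: dev-team-a
spec:
  imageName: photon-k8s-v1.16.4             # VM image carrying the K8s distribution
  className: guaranteed-large               # hypothetical CPU/memory sizing class
  storageClass: vsan-default
  powerState: poweredOn
  networkInterfaces:
    - networkName: dev-cluster-01-workload  # non-management, non-supervisor network
```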

The Guest Cluster is deployed on a separate network (neither the management network nor the supervisor network) to ensure logical separation of the clusters. From an authentication and authorization perspective, users can use their vSphere SSO credentials to interact with the cluster. This also makes life easier for the virtualization admin, who can control access to Guest Clusters and vSphere Namespaces using vSphere SSO and the existing HTML5 client. To interact with the Guest Cluster, the user asks the vCenter SSO service for a token using their SSO credentials, which also ensures that the SSO credentials themselves are never exposed directly to the guest clusters. Once the user has access to the cluster, they can deploy workloads, scale the number of worker VMs up or down, and control the version of Kubernetes running on the cluster, giving them a true self-service capability and increasing developer velocity.
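
As a rough illustration of that login flow, the snippet below shows what fetching a token-backed kubeconfig for a guest cluster might look like from a developer's machine. The plugin name and flags are assumptions based on the workflow described above, not a confirmed CLI.

```sh
# Illustrative only: the exact CLI and flag names are assumptions.
# Authenticate against vCenter SSO and get a kubeconfig context for the
# dev-cluster-01 guest cluster in the dev-team-a vSphere Namespace.
kubectl vsphere login \
  --server=supervisor.corp.local \
  --vsphere-username dev1@vsphere.local \
  --tanzu-kubernetes-cluster-namespace dev-team-a \
  --tanzu-kubernetes-cluster-name dev-cluster-01

# From here the guest cluster behaves like any conformant Kubernetes cluster.
kubectl config use-context dev-cluster-01
kubectl get nodes
```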

Hopefully, these two blog posts gave you a good understanding of Project Pacific. I will publish additional posts once the product is generally available. In the next post, we will focus on the “Manage” part of VMware’s strategy and talk about Tanzu Mission Control.
