VMware Project Pacific – Making Kubernetes mainstream

Happy New Year!! Hopefully everyone had a great end-of-year/end-of-decade party and is now scrolling through their inboxes. As far as New Year's resolutions go, I did a terrible job last year of keeping up with my blog. Publishing only a few posts wasn't great, so I want to fix that this year. Ever since VMworld US I have wanted to learn more about Project Pacific and blog about it, so it was at the top of my list of topics when I decided to resume.

In this blog, we will discuss the different aspects of Project Pacific and look at how vSphere is evolving to help both developers and operators adopt Kubernetes (K8s) in their data centers. To better understand Project Pacific, we will have to discuss the following three things:

  1. Supervisor Cluster: This capability turns an ESXi cluster into a K8s cluster in its own right, so we can run containers directly on the ESXi hosts.
  2. Native Pods: Like K8s pods, a collection of one or more containers, but backed by a container runtime built for ESXi.
  3. Guest Clusters: CNCF-conformant K8s clusters of the kind customers already use today to run their containerized applications.

So, let’s start by talking about the Supervisor Cluster and Native Pods. To understand the Supervisor Cluster and the enhancements that VMware has made at the ESXi layer, let’s first look at what a regular ESXi cluster looks like:

(FYI: All the images used in this blog are screenshots from the VMworld session slides)

ESXi_Cluster

Each ESXi host runs a local host daemon (hostd), which is responsible for managing the VMs on that host and for providing an API to the host. You can create an ESXi cluster with two or more hosts and manage all the hosts, and the VMs running on top of them, with a vCenter Server instance. As part of Project Pacific, there are modifications to both the vCenter Server and the ESXi hosts.

For the vCenter Server, there are a few additional components, shown in the image below:

vCenter_ProjectPacific

  1. Workload Platform Service: This service enables the concept of namespaces for your cluster and exposes REST APIs for interacting with them.
  2. K8s Client Bindings: These enable the Workload Platform Service to talk to the K8s API server. As an operator, you can keep using the vSphere HTML5 client to configure settings, and this service translates them into K8s settings inside the namespace (see the sketch after this list).
  3. Token Exchange Service: This service takes your vSphere SSO SAML tokens and converts them into JSON Web Tokens (JWTs) for use with K8s.
  4. Bundle of Images: An image repository for the Control Plane image and the Spherelet bundle.
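To make the Workload Platform Service and the client bindings a bit more concrete, here is a minimal sketch of the kind of standard K8s objects they might create behind the scenes when an operator carves out a namespace with resource limits in the vSphere client. The namespace name and the quota values are mine, purely for illustration; the actual objects Project Pacific creates may well differ.

```yaml
# Hypothetical objects the Workload Platform Service could create when an
# operator defines a "dev-team-1" namespace with CPU/memory limits in the
# vSphere HTML5 client. All names and values are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: dev-team-1
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-team-1-quota
  namespace: dev-team-1
spec:
  hard:
    requests.cpu: "16"      # vSphere-side CPU limit translated into K8s terms
    requests.memory: 64Gi   # vSphere-side memory limit
    limits.cpu: "32"
    limits.memory: 128Gi
```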

After you have deployed the new vCenter Server, my assumption is that you will be able to enable the Project Pacific functionality much like you enable vSAN on a cluster today. As soon as you enable Project Pacific, the Workload Platform Service in vCenter will go ahead and install the spherelet binary onto each host in the cluster.

ESXi_ProjectPacific.png

spherelet is VMware’s implementation of the kubelet for ESXi hosts. Just as the kubelet is responsible for the lifecycle of the pods on each node in a K8s cluster, spherelet is responsible for the lifecycle of the pods on each ESXi host in the supervisor cluster, and it also reports node health. In addition to installing spherelet, vCenter will deploy and configure three VMs to form a multi-master K8s control plane. Since NSX-T will be running in your environment, an NSX layer 4 load balancer balances traffic across the different control plane VMs.

Each supervisor K8s control plane VM is built from OSS K8s binaries, with a few services added on top:

ControlPlaneVM

  1. Scheduler Extension: When an operator schedules a pod, the scheduler extension works with VMware DRS to find the best host in the supervisor cluster on which to place that pod.
  2. NSX Container Plug-in (CNI) and Cloud Native Storage (CSI): These two plugins provide networking and storage for your K8s workloads; if you are running PKS or OSS K8s on vSphere, you should already be familiar with them. The NSX-T-based CNI plugin creates a new virtual switch for each new K8s cluster and each new namespace you create. There is a catch here, though: because of this integration, you have to run NSX-T in your environment if you want to implement and consume Project Pacific. The Cloud Native Storage plugin lets you consume any vSphere datastore (VMFS, NFS, or vSAN) for your K8s persistent volumes, and if you are using vSAN, you can extend the storage policy-based management (SPBM) capabilities to your K8s StorageClass definitions (see the StorageClass sketch after this list).
  3. Authenticating Proxy: Allows K8s to interact with the vSphere SSO domain.
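Since the CSI integration maps SPBM policies to K8s storage, here is a hedged sketch of what that looks like in practice with the vSphere CSI driver: a StorageClass that references an SPBM policy by name, and a PersistentVolumeClaim that consumes it. The policy name "gold-vsan" is a placeholder; you would create the actual policy in vCenter first.

```yaml
# Sketch: exposing an SPBM storage policy to K8s via the vSphere CSI driver.
# "gold-vsan" is a placeholder policy name defined in vCenter.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold-vsan
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "gold-vsan"   # SPBM policy name in vCenter
---
# A claim against the class; the CSI driver provisions a volume on a
# datastore that satisfies the policy.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: gold-vsan
  resources:
    requests:
      storage: 20Gi
```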

Each control plane VM will also have two vNICs, one for management and one for the NSX Cluster traffic.

At this point, you have a vCenter Server and ESXi hosts that are ready to run your containerized applications. But how do you actually do that? This is where the concept of Native Pods comes into the picture. Instead of using a default container runtime like Docker, VMware introduced a new container runtime called CRX just for ESXi. When you run your containers as part of a Native Pod, the Native Pod looks and feels just like a VM, and that is because it actually is one: a highly optimized VM is deployed each time you deploy a Native Pod.
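The nice part is that, as far as I can tell, nothing changes in how you describe a pod: you submit a plain pod spec to the supervisor cluster's API server, and the scheduler extension and spherelet take care of spinning up the CRX-based VM. A minimal sketch (the namespace, labels, and image are my own illustrative choices):

```yaml
# A plain pod spec. Submitted to the supervisor cluster, this would be
# realized as a Native Pod, i.e. a lightweight CRX-based VM on an ESXi host.
apiVersion: v1
kind: Pod
metadata:
  name: hello-native-pod
  namespace: dev-team-1     # the namespace sketched earlier
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.17     # illustrative image
      ports:
        - containerPort: 80
```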

Native_Pod

Native Pods also contain a CRX-optimized Linux kernel, and CRX includes an optimized VMX component. CRX starts the Linux kernel and jumps directly into booting, so the Native Pod (an optimized VM) starts up very quickly and lets you run your containers natively on the ESXi hosts. Native Pods are also deployed on the supervisor cluster itself to perform some system tasks, such as image fetching; these pods come and go with the task they are meant to perform, so they are not long-running. You can also deploy the Harbor container registry as a Native Pod. Finally, pod-to-pod communication in the supervisor cluster is enabled by the distributed load balancer running on each host, instead of by kube-proxy as in a vanilla OSS K8s cluster.
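Because the distributed load balancer takes over kube-proxy's role, a standard Service definition should work unchanged; the difference is purely in how the virtual IP is realized on each host. A hedged sketch that fronts the pod from the earlier example:

```yaml
# A standard ClusterIP Service. On the supervisor cluster, per the VMworld
# material, its virtual IP is realized by the NSX distributed load balancer
# on each ESXi host rather than by kube-proxy.
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: dev-team-1
spec:
  selector:
    app: web          # matches the Native Pod sketched above
  ports:
    - port: 80
      targetPort: 80
```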

To summarize and give a complete picture of the supervisor cluster and the Native Pods running on top of it, I think the following slide from VMworld EMEA works well:

SupervisorCluster.png

It shows how new functionality is added to the vCenter Server and the ESXi hosts to let users and developers run containers directly on top of the ESXi hosts, using the concepts of the supervisor cluster and Native Pods.

In the next blog post, we will look at the Guest Cluster functionality in Project Pacific, which gives customers a Kubernetes-as-a-Service capability.
