Containers! And Docker!
Who would have thought that these two words would bring about the whole DevOps revolution? Most of the people reading this will already know what a Docker container is, so I will try not to get too deep into the weeds of what it is and what the benefits of using one are. But just to give you a brief overview: “Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.”
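To make that definition concrete, here is a minimal sketch of running an application in a container (this assumes you have Docker installed; `nginx` is just an example image pulled from Docker Hub):

```shell
# Run nginx in a container. Everything it needs -- binaries, libraries,
# config files -- ships inside the image, not on the host.
docker run -d --name web -p 8080:80 nginx

# The container is now serving on the host's port 8080.
docker ps                        # list running containers
curl http://localhost:8080/      # nginx answers from inside the container

# Clean up when done.
docker stop web && docker rm web
```

Because the image carries its own filesystem, the exact same `docker run` behaves identically on a laptop, a test server, or a production host.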
Some might argue that a virtual machine gives us the same level of resource isolation and allocation benefits as containers, so why would we even think about switching over to the container world? To help you understand the difference between the two, let's go back to a time when there was no virtualization and each server in the datacenter ran a single application. You had dedicated servers for each application, and if you needed to deploy a new application in your datacenter, you had to acquire a new server. If you had to order a new server, the turnaround time for the application deployment was measured in months. On top of that, a majority of those servers were underutilized.
Then came virtualization, where hypervisors enabled us to use the same server and its resources to deploy multiple applications on top of it as virtual machines. The process of deploying a new application in your datacenter got easier and faster: admins were now able to deploy new applications in a span of hours to days. The application was deployed as a virtual machine on top of a hypervisor, and a virtual machine gave the admin resource isolation even though multiple virtual machines shared the same physical hardware. A virtual machine got access to virtual CPUs, memory, and disk, and you installed on it the same operating system that you would install on a physical server.
After this revolution, we elevated ourselves from one application per server to one application per virtual machine. This worked perfectly for many years, until we encountered what we call VM sprawl, where people deploy new virtual machines whenever they need to run an application and forget about the old VMs that are still running in the environment, still consuming resources! What users forget is that even though they are consuming virtual resources, each time they spin up a new VM they are also installing a full operating system on it. Even if an operating system consumes only, say, 5-10% of the virtual resources assigned to the VM, this adds up quickly when you spin up a huge number of virtual machines.
This is where containers come in handy. Containers provide the same level of resource isolation and virtualized resources that you need to run your applications as a VM would, but containers share the same operating system. So when you want to deploy 20 applications, you do not need 20 virtual machines, 20 operating systems, and 20 licenses for those VMs. You just need one base virtual machine or bare-metal install of a Linux OS, and then you can run all 20 applications on that single Docker host. And if you are a Windows shop, you don't need to worry: Microsoft's Windows Server 2016 brings full support for container technology.
I think all this is enough to kickstart your journey into the world of containers. I will keep updating my blog with additional posts about some of the things I did in my lab with this new and exciting piece of technology. So watch this space for future posts!