Borg and Kubernetes
Since the CKA (Certified Kubernetes Administrator) certification is a practical exam with no multiple-choice questions, you must wait a day or two to find out your result. I was sitting on my back porch when I checked my email, saw the "Congratulations" in the subject line, and literally shouted, "I have Kubernetes!!" My neighbor, who is not IT savvy and witnessed my strange behavior, immediately doubled his social-distancing measures with me. This was the "COVID-19 summer" of 2020, and I realized that many people had no idea what Kubernetes was, and to be fair, it does sound like something you can catch.
So, what is Kubernetes and why is it being talked about so much?
To explain why this has become such a hot topic, I like to think back to the virtualization revolution. It used to be that when a company wanted to add a new application server, the process was long and labor-intensive: you had to order the physical server, then rack it, cable it, install the OS, and finally install your application. Virtualization changed the way IT worked and, more importantly, the speed at which it worked. All you had to do now was order your hardware, and from there you could install VMs to match demand for new services and application servers. Deployments became much easier and much faster. Tasks that once took days could now be completed in only a few hours.
However, in most cases you still needed to provide an entire operating system to run just one application: a scaled-down version of the same situation in which you needed an entire physical server for that one application.
At some point the question was asked: what if, for example, you could run your database application in a bubble containing only the libraries and OS components that application needed? A container is that bubble, with everything the given application needs wrapped tightly into one entity, nothing more and nothing less. "Lean and mean" is the phrase that comes to mind, and that is what containers are. The gains from doing this for one or two applications might not be very visible, but when thousands upon thousands of VMs are reduced to much smaller containers, the result is an enormous savings in resources. You can do a lot more with less.
Then there is that other age-old problem developers experience: "But it worked on my machine!" This happens when code that worked flawlessly on one system fails on another for any of a myriad of reasons, from different library versions to differing OS versions and patch levels. A container has none of these problems: the container running on a developer's laptop can be the exact same container running in a huge data center.
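To make that portability concrete, here is a minimal sketch of a container image definition (a Dockerfile). The file names and base image are illustrative assumptions, not from the article; the point is that the application and its exact library versions are baked into one image that runs the same everywhere:

```dockerfile
# Illustrative sketch: the image bundles the app plus exactly the
# dependencies it needs, independent of the host machine's setup.
FROM python:3.12-slim              # a minimal base with just the runtime

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # the app's library versions, frozen into the image

COPY app.py .
CMD ["python", "app.py"]           # the single application this container exists to run
```

Once built, the same image can be run on a laptop or in a data center, eliminating the "different library versions" class of failures.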
Often a solution creates another problem, and running an army of containers turned out to be a huge operational headache. That is where orchestration came into play. It is common knowledge that Google runs many online services, but it is not obvious how the company manages to do this with minimal downtime while scaling up and down with demand. The challenge was to be ready for major fluctuations in demand without spending enormous amounts of money on resources and engineering hours. Google's answer was Borg, an orchestration tool that could automatically scale services up and down as needed, with built-in health checks and self-healing. The system was declarative in nature: you wrote down, in code, the way you wanted the world to look, and the orchestration system did the work of making that vision real and keeping it alive. In 2014, Google released Kubernetes, an open-source system inspired by Borg. The rest is history.
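As a minimal sketch of that declarative model, here is a Kubernetes Deployment manifest. The names and image are hypothetical examples, not from the article; what matters is that the file states a desired world ("three healthy replicas of this container") and Kubernetes continuously works to keep reality matching it:

```yaml
# Declarative desired state: Kubernetes reconciles the cluster to match this.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # illustrative name
spec:
  replicas: 3          # desired count; failed Pods are automatically replaced
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25        # illustrative image
        livenessProbe:           # auto health check: restart the container if it fails
          httpGet:
            path: /
            port: 80
```

Submitting this with `kubectl apply -f` hands the work to the orchestrator: if a container dies, a replacement is created, and changing `replicas` and re-applying scales the service up or down.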