Understanding Kubernetes Networking can be a challenge.
A couple of years ago I was tasked with setting up a distributed MinIO instance running in containers for use as a Veeam SOBR S3-compatible capacity tier. At first I thought about doing it on Kubernetes but very quickly realized that I was in over my head. I had no previous experience with Kubernetes and could not just “wing it”. Among other things, I found the networking piece especially hard to understand.
In the end I created a Docker Swarm cluster, which offered a much simpler, almost “plug and play” overlay network. While that did the trick, I came to understand that the simplicity also meant rigidity.
Kubernetes follows the age-old *nix (Unix, Linux, BSDs, and so on) philosophy of creating small, separate components that, when brought together, can scale into something very complex. Networking is no exception.
While a Kubernetes cluster does come with some default networking, called kubenet, it is very limited and, from what I understand, not meant for production environments.
What is CNI? The Container Network Interface is officially a CNCF (Cloud Native Computing Foundation) project, https://github.com/containernetworking/cni, and its use is not restricted to Kubernetes.
A basic explanation is that CNI is an agreed-upon standard against which third-party organizations can write plugins. Kubernetes can then leverage these plugins for its networking.
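In practice a CNI plugin boils down to a binary plus a JSON configuration file. On a typical kubeadm-built node the kubelet looks for these in two well-known directories; the exact paths and file names can vary with your setup, so treat this as a rough sketch:

ls /etc/cni/net.d
# the JSON network configuration files, e.g. 10-flannel.conflist

ls /opt/cni/bin
# the plugin binaries, e.g. bridge, host-local, loopback, flannel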
Kubernetes is very flexible: you can bend it to do what you want, exactly the way you want, and being declarative in nature it will do its best to keep the world in the state you described, with minimal need for further imperative interaction on your part. To achieve this, the initial design and setup are very important.
One of the choices that you need to make when setting up your own Kubernetes cluster is which CNI plugin to use. They come in many different flavors and offer diverse capabilities.
For example, after you initialize a Kubernetes cluster with kubeadm and run a “kubectl get nodes” command, the result will show the nodes as “NotReady”. This is because no CNI network plugin has been deployed yet.
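On a freshly initialized cluster it looks roughly like this (output abbreviated and illustrative; your node names and versions will differ):

kubectl get nodes
# NAME      STATUS     ROLES           AGE   VERSION
# k8s-cp1   NotReady   control-plane   2m    v1.26.1

kubectl describe node k8s-cp1 | grep -i networkready
# ... NetworkReady=false reason:NetworkPluginNotReady
#     message:... cni plugin not initialized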
The actual deployment of a CNI plugin is normally quite simple: you apply a YAML manifest provided by the organization that wrote the plugin.
For example, to deploy the Flannel plugin you can run this command:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
Here is a link to the official Flannel GitHub page: https://github.com/flannel-io/flannel
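One thing worth knowing: Flannel's manifest assumes the default pod network of 10.244.0.0/16, so the cluster is usually initialized to match. After applying the manifest you can check that the Flannel pods come up (the namespace and labels can differ between manifest versions, so adjust as needed):

kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl get pods -n kube-system -l app=flannel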
For Calico, you can run a similar set of commands, or use their operator:
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
Calico GitHub page: https://github.com/projectcalico/calico
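Calico's manifest likewise has a default pod CIDR, in this case 192.168.0.0/16. If your cluster was initialized with a different --pod-network-cidr, adjust the CALICO_IPV4POOL_CIDR value in calico.yaml before applying it. You can then verify that the calico-node pods start (label taken from the manifest install; adjust if you use the operator):

kubectl get pods -n kube-system -l k8s-app=calico-node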
For Weave Net:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Weave Net GitHub page: https://github.com/weaveworks/weave
The plugins deploy a DaemonSet to your cluster so that the CNI plugin has a pod present on each node, which is required for networking to function properly.
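You can see this for yourself once a plugin is installed. Here is an illustrative check on a three-node cluster running Weave Net (output trimmed; names and counts will differ for Flannel or Calico):

kubectl get daemonsets -n kube-system
# NAME         DESIRED   CURRENT   READY   AGE
# kube-proxy   3         3         3       20m
# weave-net    3         3         3       5m

kubectl get pods -n kube-system -o wide -l name=weave-net
# one weave-net pod per node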
You can find a list of all the CNI plugins that you can use with Kubernetes here:
https://kubernetes.io/docs/concepts/cluster-administration/networking/
I have tried to provide a brief and basic overview of what CNI is. However, there are much better and deeper technical descriptions out on the web.
Here are some good places to start:
An excellent video about CNI from Calico:
A 7-minute explanation about CNI:
How to install the Flannel plugin:
I hope this helps everyone get a basic understanding of CNI.