Introduction
This post is the third part of a blog series about Docker networking. For a better understanding, we recommend that you go through the first and second parts before proceeding with this post. While parts one and two take a deep dive into Docker networking, this post addresses Kubernetes networking. If you're interested in monitoring Kubernetes, jump over to our guide on Monitoring Kubernetes with Prometheus.
If you are familiar with containerization, the microservices philosophy, and the different network types and levels, you've already mastered a good part of Docker networking. However, in real-world scenarios and production-grade environments, containers are just one part of a whole landscape. To be able to support production workloads, Docker needs to run inside a container orchestration system. Two of the most popular orchestration frameworks are Docker Swarm and Kubernetes. These platforms automatically manage and schedule containers across a cluster of nodes (virtual or physical machines).
Orchestration is just as important as containerization, and it requires additional knowledge, particularly about networking.
Let's consider Docker Swarm and Kubernetes. If you are already familiar with the Docker networking model, you will be able to understand most of Docker Swarm's networking paradigms by going through part two of this series. In that post, we created three Docker hosts using Docker Machine: a manager and two workers. There, we saw how to establish an ingress network between containers living on different machines.
When you use Kubernetes, things are slightly different, especially the networking model. Understanding how Kubernetes networking works is essential if you want to run Docker in production. This post will look into a few different kinds of Kubernetes networking.
Kubernetes Networking
In Kubernetes, we talk less about containers and more about pods. This does not mean that containers are less important than pods; rather, containers run inside pods.
Probably the best definition of a pod is the official one: Pods are the smallest deployable units of computing that can be created and managed in Kubernetes. Containers running inside the same pod share the same resources, notably networking. This shared context is built from Linux namespaces, cgroups, and possibly other isolation and control mechanisms.
In a Kubernetes cluster, all pods should be able to communicate with all other pods in their cluster without NAT, whether those pods belong to the same node or not. Another requirement of the Kubernetes networking model concerns the communication between agents and pods: Kubernetes agents, like the kubelet, should be able to communicate with the pods on the same node.
Containers within a pod are able to reach each other’s ports on localhost.
Finally, a pod has a unique IP address, and the IP address a pod sees for itself is the same address that the other pods in the cluster see and use to reach it.
To implement the described networking model, the Kubernetes administrator can use open source tools like Project Calico, Open vSwitch, or Cilium. When using a managed cluster, the implementation is ensured by the cloud provider (e.g., AWS VPC CNI for Kubernetes, Azure CNI for Kubernetes, Google Compute Engine, etc.).
To dive deeper into the networking model, let's go through four important concepts:
- Container-to-Container Networking
- Pod-to-Pod Networking
- Pod-to-Service Networking
- Internet-to-Service Networking
Container-to-Container Networking
Containers running in the same pod behave much like containers running in the host network of the same virtual machine: a container can reach the other containers in the pod on their respective ports via localhost.
A common example of deploying multiple containers in the same pod is when an application needs a companion container, such as a proxy. Istio, for instance, uses this mechanism to run its sidecar proxy, which intercepts all of the pod's network communication in order to control, manage, and secure it. The communication between the containers goes through the loopback network interface, localhost.
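As a minimal sketch of this pattern (the pod name, container names, and images below are illustrative, not taken from any specific setup), the following manifest runs an application container next to a lightweight sidecar; because both containers share the pod's network namespace, the sidecar reaches the application on localhost:

```yaml
# Hypothetical two-container pod: the sidecar talks to the app over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app
      image: nginx:1.25        # listens on port 80 inside the shared network namespace
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox:1.36
      # Poll the app container through the loopback interface every 10 seconds.
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null && echo 'reached app on localhost:80'; sleep 10; done"]
```

After applying this manifest, `kubectl logs app-with-sidecar -c sidecar` should show the sidecar reaching the app through the loopback interface, without any port mapping between the two containers.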
Pod-to-Pod Networking
In a Kubernetes cluster, pods within a node are able to communicate with each other directly: pod-to-pod communication uses their IP addresses, since each pod has its own dedicated IP.
To understand the internals of this type of networking, we should note that every pod operates inside its own network namespace. This means that Kubernetes has to establish communication between different network namespaces on the same node. Note that, in this part of the post, we are only concerned with networking within the same node.
In a Kubernetes cluster, the eth0 interface of a pod is connected to a virtual Ethernet device on the node (vethX). This veth pair is the tunnel that connects a pod to the other pods' networks (for pods within the same node): the eth0 end is on the pod side, while the vethX end is on the node's side.
In other words, since each pod lives in its own network namespace, the virtual Ethernet device allows a pod to communicate with the other network namespaces on the same node.
But what if pods are not on the same node?
A Kubernetes cluster has a route table that maps a pod IP address to the node where it is running. The cluster assigns each node a different range of IP addresses, a CIDR block, and a pod running on a given node is assigned an IP address from that node's block.
When a pod communicates with a pod on a different node, the cluster uses the route table to send the traffic to the node where the target pod lives. Once the packet reaches the right node, the eth0/vethX mechanism explained previously routes it to the right pod.
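To make the per-node CIDR blocks more concrete, here is an illustrative excerpt of two Node objects, similar to what `kubectl get node <name> -o yaml` returns (the node names and address ranges are invented for this sketch); the `spec.podCIDR` field is the block from which pods on that node get their IP addresses:

```yaml
# Illustrative excerpt: each node owns a distinct pod CIDR block.
apiVersion: v1
kind: Node
metadata:
  name: worker-1
spec:
  podCIDR: 10.244.1.0/24    # pods scheduled on worker-1 get IPs from this range
---
apiVersion: v1
kind: Node
metadata:
  name: worker-2
spec:
  podCIDR: 10.244.2.0/24    # pods scheduled on worker-2 get IPs from this range
```

With this layout, a pod at 10.244.1.7 can reach a pod at 10.244.2.3 directly by IP: the route table sends the packet to worker-2, and the eth0/vethX mechanism delivers it to the target pod.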
Pod-to-Service Networking
Containers are ephemeral, and so are pods. But what does this mean at the networking level?
A pod has an IP address, but the pod may disappear at any moment, and Kubernetes will replace it with a new one that gets a new IP address. To overcome this challenge, we use an abstraction layer on top of the pod concept. This layer is handled by Kubernetes services.
A service assigns a single, stable virtual IP address (the ClusterIP) to a set of pods. Traffic addressed to this virtual IP is routed to one of the pods behind the service.
This mechanism not only allows Kubernetes to cope with the dynamic nature of pod networking, but also lets it stop sending traffic to dying or unhealthy pods.
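A minimal sketch of such a service could look like the following (the name, label, and ports are hypothetical); the selector ties the service's stable virtual IP to whichever pods currently carry the `app: web` label:

```yaml
# Hypothetical ClusterIP service: one stable virtual IP in front of a set of pods.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP        # the default: a cluster-internal virtual IP
  selector:
    app: web             # traffic is forwarded to healthy pods carrying this label
  ports:
    - port: 80           # port exposed on the service's virtual IP
      targetPort: 8080   # port the selected pods actually listen on
```

Other pods can now reach the application through the service's ClusterIP (or its DNS name, web), even as the pods behind it are replaced and receive new IP addresses.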
Internet-to-Service Networking
Unless your application needs to run in a network completely isolated from the external world, this type of networking is necessary: it lets you reach the Internet from within the cluster, and it lets your services be reachable from the Internet.
In the first case (service to Internet), we need an egress network, and in the second case, an ingress network will help us expose our services.
Ingress is a Kubernetes networking resource that manages external access to the services in a cluster, typically HTTP and HTTPS. It also provides mechanisms to handle this type of traffic, like load balancing, name-based routing, and SSL termination. An Ingress resource needs an ingress controller; otherwise, it has no effect on your cluster.
Usually, cloud providers implement their own ingress controller (e.g., AWS ALB Ingress Controller), but you may also use other tools like the following (a minimal Ingress manifest is sketched after this list):
- Ambassador
- Contour
- Gloo
- HAProxy Ingress Controller for Kubernetes
- Istio
- Kong
- Traefik
- NGINX Ingress Controller for Kubernetes
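Assuming an ingress controller is already running in the cluster, a minimal Ingress manifest could look like the sketch below (the host name and backend service are made up for illustration); it routes external HTTP traffic for one host to a service inside the cluster:

```yaml
# Hypothetical Ingress: name-based routing of external traffic to a service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: blog.example.com      # name-based routing: only requests for this host match
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web         # the ClusterIP service sketched in the previous section
                port:
                  number: 80
```

SSL termination can be added with a `tls` section that references a certificate stored in a Kubernetes Secret; the ingress controller then terminates HTTPS before forwarding plain HTTP to the service.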
On the other hand, the egress network manages the outbound traffic from pods (e.g., AWS VPC Internet Gateway).
It is important to understand that both ingress and egress traffic can be controlled and secured using Kubernetes network policies.
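As a hedged sketch of such a policy (the label and CIDR block are illustrative), the following NetworkPolicy only lets pods labeled `app: web` open outbound HTTPS connections to one address range, and denies all other egress traffic from those pods:

```yaml
# Hypothetical egress policy: app=web pods may only reach one CIDR over HTTPS.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-web-egress
spec:
  podSelector:
    matchLabels:
      app: web                     # the policy applies to these pods only
  policyTypes:
    - Egress                       # only outbound traffic is restricted here
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24   # the only destination range these pods may reach
      ports:
        - protocol: TCP
          port: 443
```

In practice, you would usually also allow DNS egress (port 53) so name resolution keeps working. Keep in mind that network policies are enforced by the CNI plugin, so they only take effect if the cluster runs a plugin that supports them, such as Calico or Cilium mentioned earlier.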
Conclusion
We have seen how containers inside the same pod communicate with one another, how a pod can reach other pods on the same node and beyond, and how services work to ensure pod-to-service communication. The final networking challenge in this blog post was Internet-to-service networking.
Some of the ideas in this blog post may not seem directly related to Docker networking, but they are important to understand in order to get the whole picture.
Containers are, most importantly, an isolation mechanism, and this isolation also operates at the networking level. Kubernetes extends these isolation mechanisms and applies them to a wider context, starting with container runtimes, through pod network namespaces, up to routing inbound and outbound traffic.
The concepts introduced in this third part of the blog series may be more complex than those in parts one and two, but all of them are required to get a deep understanding of how both stand-alone and managed containers' networking works.
If you're interested in monitoring your Kubernetes cluster, check out our great guide to Monitoring Kubernetes with Prometheus using the four golden signals of observability.