In this post we will talk about Kubernetes containers. Kubernetes is a container management solution, and before we get into the details of what Kubernetes is and how it works, it helps to briefly understand what containers are, how they differ from virtual machines, and why the software world is moving from servers to containers for application deployment.
What is a Kubernetes container?
A container is a software unit that bundles an application and all of its dependencies together. Containers rely on OS-level virtualization: each container runs in an isolated environment on top of an operating system. This environment shares the kernel of the host operating system but is isolated in terms of processes, networking and file system mounts. Code and dependencies are packaged together into an image, and this image is used to create containers at runtime. The running containers are isolated from each other and live in their own environments.
To explain it with an example, think of a simple Spring Boot based RESTful API application.
This application consists of a JAR file, its dependent libraries, a Java runtime (JRE) and an application server such as Apache Tomcat. The application, its dependencies, the Java runtime and the application server are packaged into a file known as a Docker image. This image is stored in a Docker registry. To run the image, we install Docker on a machine, download the image and run it with Docker. The software unit that is running in Docker is known as a 'container'. We can run multiple containers on the same Docker host. All containers share the operating system kernel but do not share process IDs, network interfaces or file system mounts.
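As a sketch, the download-and-run workflow described above looks like this on the command line (it assumes a running Docker daemon, and the registry and image names are hypothetical placeholders, not from any real project):

```shell
# Download the image from a registry (hypothetical name).
docker pull registry.example.com/orders-api:1.0

# Start a container from the image, mapping the container's
# port 8080 to port 8080 on the host.
docker run -d --name orders-api -p 8080:8080 registry.example.com/orders-api:1.0

# List the running containers on this Docker host.
docker ps
```

Running `docker run` a second time with a different `--name` would start a second, fully isolated container from the same image.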
A Docker image is created by writing a Dockerfile: a list of instructions, each of which adds a layer to the final image.
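For the Spring Boot example, a minimal Dockerfile might look like the sketch below (the base image tag and JAR file name are illustrative assumptions; a Spring Boot fat JAR ships its application server, such as embedded Tomcat, inside the JAR):

```dockerfile
# Base layer: a Java runtime (illustrative image tag).
FROM eclipse-temurin:17-jre

# Application layer: copy the Spring Boot fat JAR into the image.
# (The JAR name is hypothetical.)
COPY target/orders-api.jar /app/orders-api.jar

# Document the port the embedded application server listens on.
EXPOSE 8080

# Start the application.
ENTRYPOINT ["java", "-jar", "/app/orders-api.jar"]
```

Each instruction (FROM, COPY, and so on) contributes a layer; running `docker build -t orders-api .` in the directory containing this Dockerfile combines the layers into the final image.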
In our Spring Boot example, one layer contains our application, another contains Java, and another contains the application server. Docker combines the layers to form the image. Note that Docker stores only a single copy of each layer, so if multiple containers use the same Java layer, they share it.
Containers accomplish isolation with the help of Linux cgroups and namespaces.
Kubernetes Container Namespaces and cgroups
Linux provides namespaces, which allow a process to see only the resources that belong to its namespace. The kernel defines several kinds of namespaces; let's look at a few.
The first namespace is the process ID (pid) namespace. It gives processes an independent set of process IDs that are not visible from other namespaces. Containers use this to ensure that one container cannot see the processes of another container, or of the host.
The second namespace is network. This allows each namespace to have its own IP addresses, routing tables, firewall rules and other network-related resources.
The third namespace is mount, and this controls filesystem mount points.
The fourth namespace is ipc, and this isolates inter-process communication resources, such as System V IPC objects and POSIX message queues.
The fifth namespace is uts (UNIX time-sharing system), and this isolates the hostname and domain name, so each container can have its own hostname.
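On Linux you can inspect the namespaces a process belongs to under /proc. A quick sketch (the exact list varies by kernel version; the commented `unshare` demonstration requires root):

```shell
# Each entry is a symlink identifying a namespace the current
# process belongs to (pid, net, mnt, ipc, uts, user, ...).
ls -l /proc/self/ns

# Illustrative only: start a shell in new pid and mount namespaces.
# Inside it, `ps` would see only the shell's own process tree.
# sudo unshare --pid --mount --fork /bin/sh
```

Two processes in the same namespace show the same identifier for that namespace's symlink; a containerized process shows different ones from the host.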
In addition to namespaces, containers utilize another feature of the Linux kernel known as cgroups (control groups). Cgroups limit the CPU, memory, disk I/O and other resource usage of a process or collection of processes.
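You can see which cgroups the current process is placed in, and Docker exposes cgroup limits through flags on `docker run`. The `--memory` and `--cpus` flags below are real Docker CLI flags, but the image name is a hypothetical placeholder and the commented command assumes a running Docker daemon:

```shell
# Show the cgroup hierarchy the current process belongs to.
cat /proc/self/cgroup

# Illustrative only: limit a container to 256 MB of memory
# and half a CPU core via cgroups.
# docker run -d --memory=256m --cpus=0.5 registry.example.com/orders-api:1.0
```

If the container exceeds its memory limit, the kernel's out-of-memory killer terminates it; the CPU limit is enforced by throttling rather than termination.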
Docker vs Virtual Machines

A container is different from a virtual machine. Let's take a look at the layers of a virtual machine.
The base layer is the hardware, or the host machine. On top of the host machine is a hypervisor. A hypervisor is software, firmware or hardware that presents a virtual operating platform. It hides the physical hardware from the guest operating system, which executes as if it were running directly on the hardware; however, access to physical resources such as the network or drives is mediated by the hypervisor.
Each hypervisor can host one or more virtual machines. Each virtual machine has its own guest operating system, which is a full-blown operating system. Therefore, if you have two virtual machines running Ubuntu, you have two complete Ubuntu operating systems sending instructions to the host machine through the hypervisor, and each needs enough memory to run a full-blown operating system.
Next, let's see how containers look. Our base layer is again the host machine's hardware. On top of the hardware we have the operating system; Docker supports most Linux distributions and also supports Windows Server. On top of the operating system we install the Docker Engine, which is responsible for running the containers. Each container that you start shares the host operating system's kernel rather than bringing its own. The Docker Engine is powered by the containerd runtime, which is responsible for managing the container lifecycle and providing a container runtime. containerd also implements the Kubernetes Container Runtime Interface (CRI), so it can be used by Kubernetes as well.
This finishes our introduction to Kubernetes containers. In the next tutorial, let's look at why the world is moving from servers to containers.