Cloud Native with Containers and Kubernetes – Part 2
Part 1 can be found here…
Containers and Kubernetes
So, what does “container packaged” mean, and what value does it add? Containers – you might also have seen or heard the term “operating-system-level virtualization” – are a virtualization technology that, although not brand new, has gained a lot of popularity in recent years.
You might also have heard of Docker, software that helps create container images and run them in different environments, for example on your local machine or on a server in a datacenter. In contrast to a virtual machine image, a container image should contain only the executable parts of the application plus the libraries and tools the application requires, which significantly reduces the attack surface of the packaged software. This container image is then deployed onto a machine with a container engine, whose job it is to run the containers.
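As a minimal sketch of what “only the application and its libraries” looks like in practice – the file names and base image here are hypothetical – a Dockerfile for a small Python service could read:

```dockerfile
# Hypothetical example: package a small Python service as a container image.
FROM python:3.12-slim                      # slim base image, not a full operating system
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt   # only the libraries the application needs
COPY app.py /app/
CMD ["python", "/app/app.py"]              # the single process the container runs
```

Building this file with Docker produces an image that a container engine can then run on any host, which is exactly the loose coupling described below.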
Containers can be used for various types of applications: microservices, twelve-factor apps, Function-as-a-Service/serverless computing, or any software that runs in operating system processes – even stateful applications and brownfield monoliths.
Comparison of virtual machines versus containers
Compared to other virtualization concepts, containers have several advantages, including:
- They have only minimal overhead: In a virtual machine, you usually have to install a full operating system, which consumes extra resources. In contrast, a container image contains only the application and the libraries that are not already installed on the host. Therefore, you can have more containers running on a given host than virtual machines.
- They are loosely coupled with the operating system of the host they are running on, which means installing a containerized application is quite simple and removing a container can also be done easily without leaving any traces on the underlying server.
- Handling containers is quite dynamic, and starting and stopping them takes only a few seconds. Compared to a virtual machine – which usually takes much longer (several minutes) to start and stop – this is a great advantage. This makes containers an ideal technology for applications that require fast scalability, a make-or-break criterion for serverless computing!
If you’re building your application based on a micro-services architecture, you can imagine that one container alone will not do the job. Your application will have dependencies on a number of services, either built by you specifically for your scenario or built and run by someone else. Therefore, you need a set of containers that belong together and all of which need to be available for your application to run.
Furthermore, as already stated by the CNCF, these containers need to be dynamically managed. That’s where a container management solution comes into play. With most container management solutions, the atomic unit is not a single container, but a “pod,” a collection of containers. The pod concept introduces new design patterns, which enable innovations such as the service mesh.
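To make the pod concept concrete, here is an illustrative Kubernetes manifest – all names and images are hypothetical – declaring a pod with two containers that run side by side and share the same network namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar        # hypothetical example name
spec:
  containers:
  - name: web
    image: nginx:1.25           # the main application container
  - name: log-shipper
    image: fluent/fluent-bit:2.2   # a sidecar container in the same pod
```

The sidecar pattern shown here is one of the design patterns that pods enable: the second container augments the first (for example, shipping its logs) without being baked into its image.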
Just as Docker is not the only player in the container market, there are also many container management software offerings out there, including Apache Mesos, Docker Swarm, and Kubernetes. Currently, Kubernetes has the largest community and is therefore the best-known container management system. It has taken the lead not only thanks to its technical excellence, but also by winning over and uniting diverse community interests.
What does a container management solution do? Again, let’s have a look at what the Internet has to say, in this case, the Kubernetes homepage:
“Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.”
In essence, Kubernetes will help you run containers in a cluster environment by taking care of, for example:
- Deploying containers onto servers (known as nodes) with appropriate resources (such as CPU and memory)
- Monitoring containers to ensure that they are available and – in the case of a failure – restarting them
- Updating containers following changes in the container image (security patches, application updates)
- Autoscaling sets of pods (and nodes) with the help of monitored metric triggers
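These responsibilities are expressed declaratively. As an illustrative sketch – the name, labels, and image are hypothetical – a minimal Kubernetes Deployment manifest covering several of the points above might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical name
spec:
  replicas: 3                   # keep three pods running; failed pods are replaced
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0   # hypothetical image; changing the
                                                 # tag triggers a rolling update
        resources:
          requests:
            cpu: "250m"         # the scheduler places pods on nodes with free CPU
            memory: "128Mi"     # and memory matching these requests
```

You describe the desired state, and Kubernetes continuously works to make reality match it.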
Kubernetes consists of a control plane that manages the cluster itself and the worker nodes on which the applications and services are running. In Kubernetes, all the entities or objects under its command are managed via a uniform and extensible API and are observed and controlled by active controllers. It is this pattern that allows for the shift from imperative to declarative logic and why Kubernetes is on the path to becoming the lingua franca for cloud native software.
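The controller pattern behind this declarative model can be sketched in a few lines of plain Python. This is a conceptual illustration only – it uses no real Kubernetes API – but it shows the core idea: a controller repeatedly compares desired state with observed state and emits the actions needed to converge the two.

```python
# Conceptual sketch of the Kubernetes controller pattern (not real API code).
# A controller diffs desired state against observed state and acts on the gap.

def reconcile(desired: dict, actual: dict) -> list:
    """One reconciliation pass: compute the actions needed to converge."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

def apply(actual: dict, actions: list) -> dict:
    """Carry out the actions, yielding the new observed state."""
    for action in actions:
        if action[0] == "delete":
            actual.pop(action[1])
        else:
            actual[action[1]] = action[2]
    return actual

desired = {"web": {"image": "nginx:1.25", "replicas": 3}}
actual = {}
actual = apply(actual, reconcile(desired, actual))
# After one pass, the observed state matches the desired state,
# and a further reconcile pass produces no actions.
```

Real Kubernetes controllers run this loop continuously against the cluster API, which is why you declare *what* you want rather than scripting *how* to get there.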
You can find more details about containers and container management on the Kubernetes documentation website.
In part 3 of this little blog series, you will learn more about project “Gardener”, SAP’s contribution to the open source community.