I remember back in late 2013 my Twitter feed being flooded with articles on Docker. At the time I cared little about DevOps, and knew even less about containers. But I liked their logo, so I decided to take a look – and was blown away by the possibilities it opened.
A year later Google would announce Kubernetes (K8S), an open-source container orchestration system inspired by Borg, a cluster management system Google had been using internally for around a decade. Kubernetes provided a way to deploy, monitor and manage containers at scale.
Since then, the aptly named Kubernetes (Greek for “helmsman” or “pilot”) has become the de facto standard for container orchestration, backed by some of the largest companies in the industry and surrounded by a vibrant ecosystem of open source and commercial systems built around and on top of it. But what about the industry landscape? What impact will it have, and will K8S revolutionise the traditional enterprise data centre altogether?
To answer these questions, we first need a basic understanding of both the underlying technology and the enterprise-specific challenges it must address.
What are containers and why do they matter?
To put it simply, a container is a piece of software that packages code and its dependencies (system tools, runtime, libraries, binaries…) and runs it on a host machine’s OS kernel in an isolated environment.
This offers several benefits, for example:
– Portability: containerised software runs consistently on any infrastructure. Tired of hearing the old “but it works on my laptop!” excuse for production bugs? Containers go a long way toward solving that problem, and this is one of the pillars of the modern Cloud.
– Resource Efficiency & Speed: one of the key features of containers is that, unlike VMs (virtual machines), they don’t virtualise the hardware; they virtualise at the OS level, allowing multiple containers to share the host kernel and OS resources. This means many more containers can run simultaneously on the same machine, considerably lowering costs as a consequence. Containers are also very fast to start up; if you’ve ever heard the “serverless” buzzword, this is what makes it possible at all.
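To make the “code plus dependencies” idea concrete, here is a minimal, hypothetical Dockerfile for a small Python service (the file names and base image are illustrative assumptions, not from the original text):

```dockerfile
# Hypothetical minimal image for a Python web service
FROM python:3.11-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code itself
COPY . .
CMD ["python", "app.py"]
```

Everything the service needs — runtime, libraries, code — is baked into the image, which is what makes the resulting container portable across laptops, data centres and clouds.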
While containers on their own bring a lot to the table, the industry-changing benefits only become apparent when one takes the next logical step: container orchestration. And this is exactly where K8S comes in.
What is container orchestration and why is it important?
Modern applications, at least significantly large ones, are often no longer monoliths; rather, they consist of several loosely coupled components that need to communicate and work in tandem. For example, an app might use a service for authentication, another for ingesting data from social media streams, and yet another for serving an analytics dashboard. Such services can be run in separate containers, allowing developers to release, deploy and scale such services independently. This offers a nice separation of concerns, enabling faster release cycles for key components, as well as efficient resource allocation.
Container orchestration solves several challenges that arise from such an architecture. For example:
– Automated deployment and replication of containers
– Load balancing
– Rolling updates (updating containerised apps with no downtime)
– High Availability: when a container fails, its replicas continue to provide service
– Self-healing: restart failing containers, kill containers that fail to respond, replace containers when a node dies
– Secure communication between containers
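Several of the features above — replication, rolling updates, self-healing — come together in a single Kubernetes Deployment manifest. The sketch below uses hypothetical names and images to illustrate the idea:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend            # hypothetical service name
spec:
  replicas: 3                   # automated replication / high availability
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1         # keep serving traffic during updates
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: example/web-frontend:1.0   # hypothetical image
          ports:
            - containerPort: 8080
          livenessProbe:        # self-healing: restart on failed checks
            httpGet:
              path: /healthz
              port: 8080
```

With this declared, Kubernetes continuously reconciles reality against the spec: if a container dies or a node is lost, replacements are scheduled automatically.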
It should be pretty obvious that the aforementioned features are among the pillars of the modern Cloud, and partly explain why K8S is nowadays ubiquitous. It is equally easy to see why this approach is a perfect fit for stateless apps in particular. But what about the particular needs of big enterprise systems?
Tackling Challenges in the Enterprise
Managing State: Let’s address the elephant in the room straight away. While not an enterprise-specific challenge, it is an important one. Stateful applications like databases, caches and message queues face challenges in terms of portability, since state needs to be maintained whenever a container starts, stops, or is replicated. This is particularly challenging in a distributed or even multi-cloud environment.
K8S tries to address this mainly through Volumes, Persistent Volumes and StatefulSets. In practice, all these options are great to have and cover many scenarios; but for the time being there are still many that they do not, and the sheer complexity of containerising stateful apps often outweighs the benefits in production scenarios. The question of managing storage and containerising stateful apps is a very hot topic, and there’s a lot of effort being put in this direction in the industry (e.g. Ceph, Rook, KubeDirector, KubeDB, Red Hat’s Operator Framework).
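As a sketch of how StatefulSets and Persistent Volumes work together, the hypothetical manifest below gives each database replica a stable identity and its own persistent volume claim (the database choice and sizes are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db               # stable network identity: db-0, db-1, db-2
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:13    # illustrative choice of database
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:         # one PersistentVolumeClaim per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Unlike a Deployment, a StatefulSet keeps the pod-to-volume binding stable across restarts and rescheduling — exactly the guarantee stateful workloads need, and also where much of the operational complexity lives.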
Security: this is a big deal in the enterprise world. Despite their many advantages, containers do not offer the same level of isolation as VMs. Multi-tenancy in particular can be a challenge. Again, there is a lot of effort being put into making containers more secure; a great example being Google open-sourcing gVisor in a bid to bring better isolation to containers – and it integrates nicely with K8S.
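gVisor plugs into Kubernetes via the RuntimeClass mechanism. The sketch below assumes the cluster’s nodes have been configured with gVisor’s `runsc` handler — a deployment-specific assumption, as are the pod and image names:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc                  # gVisor's runtime, must be installed on the nodes
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod           # hypothetical pod name
spec:
  runtimeClassName: gvisor      # run this pod inside gVisor's user-space kernel
  containers:
    - name: app
      image: example/untrusted-workload:1.0   # hypothetical image
```

Only the pods that opt in pay the sandboxing overhead, so sensitive or multi-tenant workloads can be isolated more strongly while the rest of the cluster runs on the standard runtime.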
High Performance Computing (HPC): enterprise data centres typically run a variety of workloads on different types of servers, for example GPU machines meant to run intensive compute operations like ML/AI pipelines. To address this, K8S uses taints and tolerations — typically combined with node selectors or affinity rules — to ensure that pods are scheduled onto appropriate nodes (the physical or virtual machines where containers run). This approach allows workloads to run on the appropriate infrastructure and can be of use in other cases, for example running workloads on machines within a DMZ.
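As a sketch of this pattern, suppose the GPU nodes have been tainted (e.g. `kubectl taint nodes gpu-node-1 gpu=true:NoSchedule`) and labelled. A hypothetical ML workload would then both tolerate the taint and select those nodes (all names and labels here are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: training-job            # hypothetical ML workload
spec:
  tolerations:
    - key: "gpu"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"      # allowed onto the tainted GPU nodes
  nodeSelector:
    hardware: gpu               # and actually steered onto them
  containers:
    - name: trainer
      image: example/ml-trainer:1.0
      resources:
        limits:
          nvidia.com/gpu: 1     # assumes the NVIDIA device plugin is installed
```

The taint keeps ordinary workloads off the expensive GPU machines; the toleration plus node selector lets the ML job, and only the ML job, land there.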
Multi-cloud: enabling hybrid and multi-cloud deployments and avoiding vendor lock-in are key requirements for the modern enterprise. This poses significant technical challenges which cannot be addressed by a single tool, but typically require a combination of technologies and architectural approaches. This is one of the reasons why we’ve seen a growth in enterprise K8S offerings, for example OpenShift, Docker Enterprise and Google’s Anthos.
An Open Source Success Story
K8S is today one of the top open source projects, operating under the Cloud Native Computing Foundation (CNCF), itself part of the Linux Foundation. The CNCF acts as an umbrella organisation for K8S and is backed by some of the largest companies in the industry, such as Apple, Microsoft, Google, Amazon, SAP, Oracle and many others. As a result, a vast ecosystem of open source technologies has evolved around K8S, from monitoring solutions like Prometheus and container runtimes like containerd to package managers like Helm. Many of the biggest players in the industry are thus incentivised both to take part in shaping the future of cloud computing by contributing to the CNCF, and in turn to leverage the ecosystem for their commercial offerings.
Examples In The Enterprise
At SAP, we face many of the aforementioned enterprise challenges daily – for example the need for large-scale, heterogeneous multi-cloud and hybrid solutions. This is where Gardener comes in: an open source project offering operation of K8S clusters as a service, on various cloud providers, at scale. Internally, SAP’s business platform, SAP Cloud Platform, leverages Gardener to provision K8S clusters for its customers.
In a similar vein, SAP developed the open source Kyma project on top of K8S to extend and customise cloud and on-premise enterprise applications. The vision for Kyma is to act as the glue between Mode 1 and Mode 2 environments, essentially allowing users to extend their Mode 1 environment with Mode 2 capabilities without disrupting the existing Mode 1 systems.
We have discussed the challenges specific to the enterprise and the options K8S and its ecosystem employ to address them. As a takeaway, there are a few points worth reiterating:
– containers are not the answer to everything; but K8S is probably the default way to manage containerised systems at present
– container tech is constantly evolving and a lot of effort is being put into overcoming current challenges (e.g. containerising databases)
– as the K8S ecosystem evolves, a positive feedback loop emerges resulting in increasingly sophisticated technology
– most big tech players are directly involved in enriching the ecosystem and building commercial offerings on top of it
Considering the aforementioned points, it is clear we are currently at an exciting point in the evolution of cloud native technologies. As for Kubernetes, it is increasingly making headway into the enterprise world. It will be interesting to see how this growth continues and how the ecosystem will adapt and evolve in turn.