Kubernetes in the Enterprise

Kosmas Pouianou

I remember back in late 2013 my Twitter feed being flooded with articles on Docker. At the time I cared little about DevOps, and knew even less about containers. But I liked their logo, so I decided to take a look – and was blown away by the possibilities it opened.

A year later, Google announced Kubernetes (K8S), an open-source system based on Borg, the cluster management system it had been running internally for around a decade. Kubernetes provided a way to deploy, monitor and manage containers at scale.

Since then, the aptly named Kubernetes (the name means “helmsman” in Greek) has become the de facto standard among container orchestration systems, backed by some of the largest companies in the industry, with a vibrant ecosystem of open source and commercial systems built around and on top of it. But what about the industry landscape? What impact will Kubernetes have, and will it revolutionise the traditional enterprise data centre altogether?

To tackle these questions, we first need a basic understanding of both the underlying technology and the challenges specific to the enterprise.

What are containers and why do they matter?

To put it simply, a container is a unit of software that packages code together with its dependencies (system tools, runtime, libraries, binaries…) and runs it on the host machine’s OS kernel in an isolated environment.


This offers several benefits, for example:
– Portability: containerised software runs consistently on any infrastructure. Tired of hearing the old “but it works on my laptop!” excuse for production bugs? Containers go a long way towards solving that, and portability is one of the pillars of the modern Cloud.
– Resource Efficiency & Speed: one of the key features of containers is that, unlike VMs (virtual machines), they don’t virtualise the hardware, just the OS, allowing multiple containers to share OS resources. This means many more containers can run simultaneously on the same machine, considerably lowering costs. Containers are also very fast to start up; if you’ve ever heard the “serverless” buzzword, this is what makes it possible at all.
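To make packaging concrete, here is a minimal Dockerfile sketch for a hypothetical Python service (the file names app.py and requirements.txt are assumptions, not taken from a real project):

```dockerfile
# Base image supplies the OS userland and the Python runtime
FROM python:3.11-slim
WORKDIR /app

# Install the app's library dependencies first, to benefit from layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY . .
CMD ["python", "app.py"]
```

Building this with `docker build` produces an image that runs the same way on a laptop as on a production server, which is exactly the portability point above.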

While containers on their own bring a lot to the table, the industry-changing benefits only become apparent when one takes the next logical step: container orchestration. And this is exactly where K8S comes in.

What is container orchestration and why is it important?

Modern applications, at least the larger ones, are often no longer monoliths; rather, they consist of several loosely coupled components that need to communicate and work in tandem. For example, an app might use one service for authentication, another for ingesting data from social media streams, and yet another for serving an analytics dashboard. Such services can be run in separate containers, allowing developers to release, deploy and scale them independently. This offers a clean separation of concerns, enabling faster release cycles for key components as well as efficient resource allocation.


Container orchestration solves several challenges that arise from such an architecture. For example:
– Automated deployment and replication of containers
– Load balancing
– Rolling updates (updating containerised apps with no downtime)
– High Availability: when a container fails, its replicas continue to provide service
– Self-healing: restart failing containers, kill containers that fail to respond, replace containers when a node dies
– Secure communication between containers
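Several of the features above (replication, rolling updates, self-healing) are expressed declaratively in a single Kubernetes Deployment manifest. The sketch below is illustrative only; the service name, image and health endpoint are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-service           # hypothetical service from the example above
spec:
  replicas: 3                  # automated replication / high availability
  selector:
    matchLabels:
      app: auth-service
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # keep serving traffic while updating
  template:
    metadata:
      labels:
        app: auth-service
    spec:
      containers:
      - name: auth
        image: example.com/auth-service:1.0   # hypothetical image
        ports:
        - containerPort: 8080
        livenessProbe:         # self-healing: restart a container that stops responding
          httpGet:
            path: /healthz
            port: 8080
```

Applying this manifest tells K8S the desired state; the control plane then continuously works to keep three healthy replicas running.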

It should be pretty obvious that the aforementioned features are among the pillars of the modern Cloud, and partly explain why K8S is nowadays ubiquitous. It is equally easy to see why this approach is a perfect fit for stateless apps in particular. But what about the particular needs of big enterprise systems?

Tackling Challenges in the Enterprise

Managing State: Let’s address the elephant in the room straight away. While not an enterprise-specific challenge, it is an important one. Stateful applications like databases, caches and message queues face challenges in terms of portability, since state needs to be maintained whenever a container starts, stops, or is replicated. This is particularly challenging in a distributed or even multi-cloud environment.

K8S tries to address this mainly through Volumes, Persistent Volumes and StatefulSets. In practice, all these options are great to have and cover many scenarios; but for the time being there are still many that they do not, and the sheer complexity of containerising stateful apps often outweighs the benefits in production scenarios. Managing storage and containerising stateful apps is a very hot topic, and a lot of effort is being put in this direction across the industry (e.g. Ceph, Rook, KubeDirector, KubeDB, Red Hat’s Operator Framework).
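As an illustration of these building blocks, a StatefulSet gives each replica a stable identity and its own persistent storage via volumeClaimTemplates. The sketch below assumes a hypothetical Postgres setup; the names, image and storage size are illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db              # headless Service providing stable network identities
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgres
        image: postgres:15     # hypothetical choice of database
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # each replica gets its own PersistentVolumeClaim,
  - metadata:                  # which survives pod restarts and rescheduling
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Note that this keeps each replica’s storage attached across restarts, but it does not by itself solve replication or failover for the database; that is exactly where the complexity mentioned above comes in.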

Security: this is a big deal in the enterprise world. Despite their many advantages, containers do not offer the same level of isolation as VMs. Multi-tenancy in particular can be a challenge. Again, there is a lot of effort being put into making containers more secure; a great example being Google open-sourcing gVisor in a bid to bring better isolation to containers – and it integrates nicely with K8S.
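For context, gVisor plugs into K8S through the RuntimeClass mechanism: pods that opt in run under the gVisor handler instead of the default runtime. This sketch assumes the runsc binary is already installed on the nodes; the pod name and image are hypothetical:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc                 # the gVisor runtime, assumed installed on the nodes
---
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload
spec:
  runtimeClassName: gvisor     # run this pod's containers inside gVisor's sandbox
  containers:
  - name: app
    image: example.com/untrusted:1.0   # hypothetical image
```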

High Performance Computing (HPC): enterprise data centres typically run a variety of workloads on different types of servers, for example GPU machines meant to run intensive compute operations like ML/AI pipelines. To address this, K8S uses taints and tolerations to ensure that pods are scheduled onto appropriate nodes (the physical or virtual machines where containers run). This approach allows workloads to run on the appropriate infrastructure, and can be of use in other cases too, for example running workloads on machines within a DMZ.
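As a sketch of how taints and tolerations work: an administrator taints the GPU nodes, and only pods that declare a matching toleration may be scheduled there. The node and image names below are hypothetical, and the GPU resource line assumes the NVIDIA device plugin is installed on the cluster:

```yaml
# First, taint the node (hypothetical node name):
#   kubectl taint nodes gpu-node-1 hardware=gpu:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: training-job
spec:
  tolerations:                 # without this, the taint repels the pod
  - key: "hardware"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  containers:
  - name: trainer
    image: example.com/ml-trainer:1.0   # hypothetical image
    resources:
      limits:
        nvidia.com/gpu: 1      # assumes the NVIDIA device plugin is installed
```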


Multi-cloud: enabling hybrid and multi-cloud deployments and avoiding vendor lock-in are key requirements for the modern enterprise. This poses significant technical challenges which cannot be addressed by a single tool, but typically require a combination of technologies and architectural approaches. This is one of the reasons we’ve seen a growth in enterprise K8S offerings, for example OpenShift, Docker Enterprise and Google’s Anthos.

An Open Source Success Story

K8S is today one of the top open source projects, operating under the Cloud Native Computing Foundation (CNCF), itself part of the Linux Foundation. The CNCF acts as an umbrella organisation for K8S and is backed by some of the largest companies in the industry, such as Apple, Microsoft, Google, Amazon, SAP, Oracle and many others. As a result, a vast ecosystem of open source technologies has evolved around K8S, ranging from monitoring solutions like Prometheus and container runtimes like containerd to package managers like Helm. Many of the biggest players in the industry are thus incentivised both to take part in shaping the future of cloud computing by contributing to the CNCF, and in turn to leverage the ecosystem for their commercial offerings.

Examples In The Enterprise

At SAP, we face many of the aforementioned enterprise challenges daily, for example the need for large-scale, heterogeneous multi-cloud and hybrid solutions. This is where Gardener comes in: an open source project offering operation of K8S clusters as a service, on various cloud providers, at scale. Internally, SAP’s business platform, SAP Cloud Platform, leverages Gardener to provision K8S clusters for its customers.


In a similar vein, SAP developed the open source Kyma project on top of K8S to extend and customise cloud and on-premise enterprise applications. The vision for Kyma is to act as the glue between Mode 1 and Mode 2 environments, essentially allowing users to extend their Mode 1 environment with Mode 2 capabilities without disrupting the existing Mode 1 systems.

Finally, in terms of commercial offerings, SAP leverages K8S for its Data Intelligence cloud service, as well as its on-premise enterprise data orchestration solution, DataHub.

Moving Forward

We have already discussed challenges specific to the enterprise and the options K8S and its ecosystem offer to address them. As a takeaway, there are a few points worth reiterating:
– containers are not the answer to everything; but K8S is probably the default way to manage containerised systems at present
– container tech is constantly evolving and a lot of effort is being put into overcoming current challenges (e.g. containerising databases)
– as the K8S ecosystem evolves, a positive feedback loop emerges resulting in increasingly sophisticated technology
– most big tech players are directly involved in enriching the ecosystem and building commercial offerings on top of it

Considering the aforementioned points, it is clear we are currently at an exciting point in the evolution of cloud native technologies. As for Kubernetes, it is increasingly making headway into the enterprise world. It will be interesting to see how this growth continues and how the ecosystem will adapt and evolve in turn.

      Marius Obert

      Nice post, Kosmas! I especially like the "containers are not the answer to everything" note, I often believe that people forget this 🙂


      Btw: The Greek meaning of the word is "Steersman". I heard that it could be interpreted as a Latin word as well.

      Vineet Gupta

      Is SAP Analytics Cloud using K8S?

      Witalij Rudnicki

      You mentioned "Twitter feed" in your opening sentence, but I could not find you on Twitter?

      Kosmas Pouianou
      Blog Post Author

      Hi Witalij - I've not been using Twitter for the past couple of years, though I consider giving it another chance at some point!

      Roland Kramer

      Hello @Kosmas Pouianou,

      First of all, thanks for bringing more light to the Kubernetes topic.

      see how the Azure Kubernetes Services (AKS) hosting SAP DataHub 2.6 - prepare the Jump Server for the SLC Bridge

      this is the foundation to run the SAP DataHub Implementation ...

      Best Regards Roland

      Adrian Lawrence

      Interested to know how OpenShift, Docker Enterprise and Google’s Anthos compare to each other?

      Ankit Baphna

      Excellent document. Do existing, already-built applications (with customers) have to go through any architecture changes if they have to run on a K8S cluster?

      maxx currey

      My friends ask me why as an oldster I use Python, Github, open source, VMs, Kubernetes, etc. and indeed "like" them.

      My answer is easy, I programmed on IBM's VM/CMS for about five years.  (Now called z/VM).

      The "shell language" on IBM's VM/CMS was Rexx which is almost a twin of Python.

      There was a 1970s "open source" world-wide group called "VMWARE" which was a global village well before the web.

      In IBM's VM/CMS "Every" userid had their very own virtual machine, the virtual machine could be different OSs, e.g. MS/DOS, Unix, MVS, AS/400, Mainframe DOS, VM/CMS itself, etc.
      IOW whatever the hardware could support.

      The VMs did not have final control, it was controlled by the "VM/CP" (control program) which was the "Kubernetes" handling "containers". This was why the VMs would run faster as a "guest operating system" than they would alone on native hardware often as not.  Same stuff, different names.

      IBM's VM/CMS was copied in a limited way by VMWARE since the original source code was available as open source in the public domain, available for all to use.

      IBM's VM/CMS started at the same time on the same government-funded project as the twin brother of Unix (or vice versa depending upon your prejudice).

      The above is much the same as Kubernetes, Containers etc. This stuff has been around since the mid-1960s and of course it is much easier and better today, but we need to remember those
      from almost 60 years ago who were doing the same, as today we do in 'the cloud', but they did it with primitive hardware, albeit with a sturdier and more inexpensive telephone system than today.

      They also wrote enough code to put humans on the moon and return them alive.