
SAP on Google Cloud: When SAP Developers join the cloud (pt. 1)

During the first SAP Online Track, Fatima Silveira and I had the privilege of sharing our newest experiments with extensions on containers, SAP HANA, HDI containers and CI/CD pipelines.

This blog post provides some preliminary context and is the first in a series showing what we did and how we did it, with notes on how to implement this in a real-life scenario.

When SAP joins the cloud

This adventure started when a team of seasoned cloud-native developers heard that SAP workloads were joining the cloud. This team already had a set of established practices for cloud-native development. They knew SAP is queen of business processes and wanted to invite her to the party.

They knew they could bring a lot of innovation to the business; they just needed to get past the acronyms in the SAP world first.

The first acronym – HDI for the win

Together with SAP’s take on Cloud Foundry came HANA Deployment Infrastructure (HDI) containers in the SAP HANA world. Despite being presented together, HDI containers do not need the XS Advanced or SAP Cloud Platform layer to exist in a HANA database.

Thomas Jung made this very easy to understand with a practical approach in this blog post.

(Diagram source: help.sap.com)

HDI containers have been a source of heated disputes at family dinners. Without understanding the benefits they bring, it’s hard to make the case for migrating an existing codebase written in XS Classic on a sidecar HANA. You have your calculation views feeding your reporting tool and it works like a charm, so why bother?

 

Here is one reason: HDI containers allow for branching and isolation. Each container is its own isolated deployment target, so every developer or feature branch can get a private copy of the database artifacts. We’ll see this in practice in our pipeline.
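To make the isolation concrete, here is a minimal sketch of what a throwaway container per feature branch could look like with the HDI container SQL API. The container name DEV_FEATURE_X is a hypothetical example, and the exact procedure signatures can vary by HANA revision, so treat this as an illustration rather than copy-paste code:

```sql
-- Create an isolated HDI container for a feature branch
-- (run as a user with the relevant _SYS_DI administration privileges)
CALL _SYS_DI.CREATE_CONTAINER('DEV_FEATURE_X', _SYS_DI.T_NO_PARAMETERS, ?, ?, ?);

-- When the branch is merged or abandoned, drop the container and everything in it
CALL _SYS_DI.DROP_CONTAINER('DEV_FEATURE_X', _SYS_DI.T_NO_PARAMETERS, ?, ?, ?);
```

Because each container behaves like its own namespace, dropping it cleans up the entire branch environment without touching anyone else’s objects.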

The second acronym – CI/CD

Whoa… This one was not coined by SAP, but defining this acronym can quickly turn into a debate. As a refresher:

  • Continuous integration: There is a shared repo (e.g., GitHub) to which developers merge their code. This merging process triggers automated code checks and tests to make sure nothing is broken by the new changes. If the tests pass, the whole thing is bundled to be deployed. 
  • Continuous delivery: The act of manually triggering the deployment of artifacts, for example once the changes have been approved and can be pushed into production. This standardized workflow is not too different from what most of us do to move a transport request into a productive Netweaver installation. The ability to roll back a change is also relevant here.
  • Continuous deployment: Same as delivery, but the deployment into production is automated. 

The main point here is that we want to make small, incremental changes to add functionality to our code and release it as often as possible. We will use a series of tools to check our code, test it and release it. 
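As a minimal sketch of the continuous-integration half, here is what a Cloud Build configuration wired to pushes on the shared repo could look like. The builder images, the npm-based tests, and the my-app image name are illustrative assumptions, not our actual pipeline:

```yaml
# cloudbuild.yaml - triggered on every push to the shared repo
steps:
  # Continuous integration: run the automated checks and tests first
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  - name: 'gcr.io/cloud-builders/npm'
    args: ['test']
  # If the tests pass, bundle everything into a deployable container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA', '.']
images:
  - 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA'
```

Tagging the image with the commit SHA gives you a deployable artifact per change, which is what makes both delivery (manual promotion) and deployment (automated promotion) possible later, including rollbacks to a previous tag.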

If you’d like to understand more, there are plenty of introductory resources on these topics worth bookmarking.

Wait… What about DevOps?

Well, DevOps goes beyond CI/CD and includes concepts like Infrastructure as Code, which deserve their own full chapter. The one principle I’d take from DevOps culture is automation, with a focus on a faster feedback loop.

 

Cloud Run – not an acronym, still important

I have loved Cloud Run since it was first announced because it allows you to deploy managed containers that autoscale and serve HTTPS endpoints without knowing much about containers at all. 

The magic behind this is Knative, an open source project started by Google and contributed to by SAP, IBM, Pivotal and others. Knative is about simplifying serving and eventing for applications deployed on Kubernetes.

Why is it magic? It converts Kubernetes containers into serverless applications.

(Diagram: Knative, Istio and Kubernetes)

To understand what Knative does, think of what you would need to manually deal with if Knative wasn’t serving the application above:

  • Defining the Kubernetes services and deployments yourself
  • Routing and balancing incoming traffic to new revisions of an application
  • Automatically increasing or decreasing the number of replicas based on incoming requests (i.e., auto-scaling, even to zero)
  • Keeping a stable endpoint that other apps or users can connect to (even after you deploy a new version of the container)

Now that we have established some context, let’s roll up our sleeves and dive into details.

 

Check out part 2 for details on how we set up HDI in our CI/CD pipeline.

 

Lucia Subatin and Fatima Silveira

 

(Originally posted on Medium.com)
