SAP on Google Cloud: HANA HDI containers and CI/CD pipelines (pt.2)
We built an application with managed containers: Node.js and Golang for the frontend and backend modules, plus HDI containers on SAP HANA. We also defined a CI/CD pipeline to keep them consistent.
In this blog post, we’ll explain what the CI/CD pipeline looks like and how it incorporates the concept of HDI containers. This follows part 1. Here’s the video with the same walk-through from SAP Online Track.
As you can see in the boxes-inside-boxes diagram above, our application has three microservices:
- A frontend, a web application written in Node.js
- A backend application, written in Golang, that also talks to the translate API
- A database access layer responsible for talking to SAP HANA using the Node.js client for SAP HANA
You can create this app yourself, hopefully for free and without swiping a credit card. We published this Qwiklab that will spin up a HANA Express machine on Google Cloud for you while you complete the lab. The initial free Qwiklab credits should be enough to run this lab. I’d recommend you clone the app into your own GitHub so you can use it afterwards.
I think the pipeline is better understood when seen in action. So let’s say we are a group of developers working on the different microservices that comprise our application.
Here’s how the tooling overlords would govern our day (or how we govern them, we’ll see…)
Shared Git Repository
We chose Google Cloud Source Repositories as our private Git repository. This repo and its main branch were “born” with the application.
You know what else was born with the first deployment? An HDI container!
Creating the HDI container
Just like we have a branch that acts as the main branch, we’ll have an HDI container that acts as the reference for everyone and is loaded with test data.
Here’s an example of how this first container was created using the hana-cli.
Here’s the asciinema if you want to follow along or copy and paste.
The default-env.json file now contains credentials to this container. The Node.js library hdi-deploy will look for a file with this name or the environment variable VCAP_SERVICES to connect to SAP HANA and deploy our new tables and other artifacts into the HDI container.
We will use the contents in this file to create an environment variable called VCAP_SERVICES later.
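As a sketch of that step (assuming `default-env.json` has the top-level `VCAP_SERVICES` key that hana-cli writes, and that `jq` is available), the variable can be derived like this; the credentials in the example file are made up:

```shell
# Illustrative default-env.json with the shape hana-cli produces
# (host/port values here are invented for the example):
cat > default-env.json <<'EOF'
{"VCAP_SERVICES":{"hana":[{"credentials":{"host":"hxehost","port":"39015"}}]}}
EOF

# Export the nested object as the VCAP_SERVICES environment variable:
export VCAP_SERVICES="$(jq -c '.VCAP_SERVICES' default-env.json)"
echo "$VCAP_SERVICES"
```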
Starting a change
For the sake of simplicity, let’s imagine we need to add a column to the existing table in the HDI container.
I’ll make sure I pull all the changes from the main branch. This will bring the latest artifacts that have been deployed into the main HDI container too.
Finally, I’ll create my own branch:
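In plain git terms this is a pull followed by a new branch. The snippet below demos it in a throwaway repo so it runs anywhere; the branch name is hypothetical:

```shell
# Demo in a throwaway repo; against the real repo this is just
# `git pull` followed by `git checkout -b <branch>`.
git init -q demo-repo
git -C demo-repo -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "initial state from main"
git -C demo-repo checkout -q -b feature/add-column
git -C demo-repo rev-parse --abbrev-ref HEAD   # prints feature/add-column
```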
Just like I have created my own branch, I will create my own version of an HDI container using the deployment files that I just pulled from the main branch.
This time, I will append a marker (_LS in the example below) to the name of my container so that it does not conflict with the main one (Web IDE does this automagically).
```shell
hana-cli connect -s
hana-cli createContainer -c RUN_LS -e -s
```
Now I have the new credentials for this container in a new .json file. I will go ahead and make a change to the table I pulled:
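As a hypothetical example of such a change (the table name, columns, and path below are made up, not the actual ones from the app), adding a column to an .hdbtable design-time artifact looks like this:

```shell
mkdir -p db/src
# Hypothetical .hdbtable artifact; TRANSLATED_NAME is the newly added column:
cat > db/src/products.hdbtable <<'EOF'
COLUMN TABLE "PRODUCTS" (
  "ID" INTEGER,
  "NAME" NVARCHAR(100),
  "TRANSLATED_NAME" NVARCHAR(100)
)
EOF
```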
Using the new credentials in the default-env.json file, the ones for my RUN_LS container, I will deploy this change. The default-env.json file should be in the /db folder, from where we run these commands:
```shell
npm install
npm start
```
This calls the hdi-deploy module, which connects to the HDI container and deploys the schema with my tables into it.
Since our credentials are not meant to leave our local environment, let’s make sure the .gitignore file includes these default-env*.json files.
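For instance:

```shell
# Keep the credential files out of version control:
printf 'default-env*.json\n' >> .gitignore
```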
And now let’s commit+push before each coffee break:
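The commit-and-push itself is standard git. Here is a runnable demo using a local bare repo in place of Cloud Source (all names are illustrative):

```shell
git init -q --bare origin.git          # stands in for the remote repo
git init -q work
git -C work remote add origin ../origin.git
git -C work -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "Add column to table"
git -C work push -q origin HEAD:feature/add-column
```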
We should now see the change in my Git branch in Google Cloud Source:
Merging a change
We have finished making some changes and doing some quick tests on our own container. We have committed those changes (multiple times) into our own branch. Now it’s time to submit them into the main branch.
In case the other developers working on this application are also making changes, I pull the main branch again before merging.
And after we have merged the changes, the main branch is now showing them… as expected:
Automated deployment and testing
One of the key ingredients of our “deploy often, fail fast” recipe is automation. When do we want it? As soon as we push into the main branch. Yes, like we just did.
We want to make sure everything that used to work still works, but first we need the changes deployed into the main container.
We also want to make sure that our new tables and artifacts are tested when someone else deploys. This means that our new tables also need to be incorporated into the automated tests, so we’ll build those into the pipeline too.
We’ll need some SAP HANA-specific tools: hana-cli, hdbsql, and the SAP HANA client for Node.js, if we want to run tests like the ones documented here.
We can pre-install all of these in a Docker container, and have that container do the deployment and the testing. We’ll cover this in a future blog post, but here is a sneak peek of what this looks like:
The tool we are using to coordinate this is Cloud Build. We are using a trigger to start the build and test process every time there is a push into the main branch:
We use the Docker container and inject VCAP_SERVICES as an environment variable, so that the Node.js deployer can do its magic with the main HDI container, RUN:
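As a rough sketch of that step (the builder image name and the `_VCAP_SERVICES` substitution variable are assumptions for illustration, not the project’s actual config), the Cloud Build configuration could look like this:

```shell
# Hypothetical cloudbuild.yaml: run the hdi-deploy step inside a custom
# builder image, passing the credentials in via a substitution variable.
cat > cloudbuild.yaml <<'EOF'
steps:
  - name: 'gcr.io/$PROJECT_ID/hana-tools'   # image with hana-cli, hdbsql, Node.js
    dir: 'db'
    entrypoint: 'npm'
    args: ['start']
    env:
      - 'VCAP_SERVICES=${_VCAP_SERVICES}'
EOF
```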
There is a better way to pass these credentials, using secrets. But we’ll keep that one for the upcoming blog posts.
For details on how to build the pipeline for the rest of the microservices, and some tweaks towards using this in a productive environment, here is part 3.
(Originally posted in medium.com)