The purpose of this blog series is to describe possible approaches to CI/CD for SAP Cloud Integration (aka CPI), addressing some of what I consider pitfalls or limitations. If you're aware of standard SAP mechanisms that deal with these, just let me know. Each of the topics below will be linked once the corresponding blog part is available. After each section I try to highlight the motivation and the value added by each feature or development.

Just a disclaimer: every observation I make in this series is my personal opinion and of course very debatable, so I encourage you to comment on alternatives or possible issues with the approaches I took.

When I first joined my current employer, we were just starting to use SAP CPI, so I had the chance to influence, propose and implement some ideas regarding CI/CD and to lay some groundwork for future processes. Nowadays we have around 400 integration flows running, so there's definitely some governance needed.

Some context information:

  • Our company uses a 4-system landscape (dev, test, preprod and prod)

  • We use Cloud Foundry instances only (no Neo)


I'll now present how we addressed the following topics:

Each one of these areas will be an in-depth explanation with most of the steps needed to help you create a similar platform. When we started this, Project Piper was still in its early stages; from my understanding it would only run on Linux, and we had a Windows VM. There was always the option to use Docker, but using Docker completely free of charge would mean running it without Docker Desktop, which we assumed (perhaps wrongly) would be a big configuration effort. So although we use some of the same platforms as Piper (like Jenkins), we're not using the official Project Piper implementation.

We started this initiative as a kind of PoC to play a bit with CI/CD and see how we could benefit from it, so we tried to stick to open source and zero cost whenever possible. Crucible came into the game later, and we decided the licensing cost was totally worth it, so we ordered it.

  • Jenkins - We can think of Jenkins as just a scheduler, a trigger for our automations

  • Gitea - Our on-premise source control repository. If you have no issue storing your code in the cloud, GitHub may be a better option for you since it supports GitHub Actions

  • Crucible - A tool from Atlassian that lets you create and manage code reviews on your source code


Special thanks to antonio_jorge_vaz, who contributed to this solution on most of these topics.

Backup Binaries and Source Code


Cloud Integration is a great tool that allows you to create integrations using a simple UI. It has the concept of versions, but those versions are created freely by developers with no enforced naming convention. While this is great for flexibility, it is not so great for consistency, since each developer will follow their own versioning convention. We decided to use the semver model, but even then nothing prevents a developer from creating the same semver version all over again, or worse, creating out-of-order semver versions, which ends up in a big mess. On top of that, all of your code is only "saved" on SAP servers, so if you delete your Integration Suite tenant by mistake, there's no way to recover any of your work (trust me, I've been through that...).

Our landscape currently has 4 environments (DEV, TEST, PREPROD and PROD).

Since my early days of joining, I wanted to have backups of our binaries (packages) as well as of the iflow source code. We asked for a new on-premise server (Windows) and installed the following on it:

  • Jenkins

  • Gitea

  • Crucible



Main general architecture



Binaries Backup


The idea was to have a pipeline scheduled in Jenkins that synchronizes on a daily basis (per environment). The code uses the CPI API to retrieve all package binaries and stores them in git. What we then realized was that if a package has iflows in Draft status, you can't download the full package, so we fell back to downloading every binary available inside the package individually (at the time we developed this, only value mappings and integration flows were supported).
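
To give an idea of what the backup logic does, here is a minimal Groovy sketch (not our actual pipeline code) that lists the packages and downloads each integration flow as a zip via the Cloud Integration OData API; the host and the basic-auth credentials are placeholders, and downloading per artifact avoids the Draft limitation mentioned above:

import groovy.json.JsonSlurper

def host = "https://dummy-tenant.it-cpi.cfapps.eu10.hana.ondemand.com"
def auth = "user:password".bytes.encodeBase64().toString()

// small helper to call the OData API with basic authentication
def get = { String path ->
    def conn = new URL(host + path).openConnection()
    conn.setRequestProperty("Authorization", "Basic " + auth)
    conn.setRequestProperty("Accept", "application/json")
    return conn
}

// 1. list all integration packages
def packages = new JsonSlurper()
        .parse(get("/api/v1/IntegrationPackages").inputStream).d.results

packages.each { pkg ->
    // 2. list the iflows of each package and download them one by one as zips
    def iflows = new JsonSlurper()
            .parse(get("/api/v1/IntegrationPackages('${pkg.Id}')/IntegrationDesigntimeArtifacts").inputStream).d.results
    iflows.each { iflow ->
        def zip = new File("IntegrationPackages/${pkg.Id}/${iflow.Id}.zip")
        zip.parentFile.mkdirs()
        zip.bytes = get("/api/v1/IntegrationDesigntimeArtifacts(Id='${iflow.Id}',Version='${iflow.Version}')/\$value")
                .inputStream.bytes
    }
}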


Package binary file stored on git as a zip file



Iflows and value mappings zip files inside the package


As the end result, all our CPI binaries are stored as zip files inside git. The advantage is that if you need to restore, it is quite easy to import the zip files again. The disadvantage is that, for source control itself and for tracking the history of changes, binary files are not great for analysis. Therefore we thought it would be interesting to have not only a single backup pipeline for everything but also a pipeline per CPI package, running package-specific checks and logic.

The next step was security related: we wanted to be able to version the keystore and security material, so using the CPI API we downloaded all the certificates from all environments and also synchronized all our security material with a KeePass file. We then committed all of this to our binaries repository, per environment.
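
For illustration, a similar approach can be sketched for the keystore, reusing the get() helper from the sketch above; the Security Content resource paths shown here are assumptions taken from the API documentation, so verify them for your tenant:

// list the keystore entries and store each certificate in the Keystore folder
def entries = new JsonSlurper()
        .parse(get("/api/v1/KeystoreEntries").inputStream).d.results

entries.each { entry ->
    def cert = new File("Keystore/DEV/${entry.Alias}.der")
    cert.parentFile.mkdirs()
    // resource path assumed from the Security Content API – double-check it
    cert.bytes = get("/api/v1/KeystoreEntries('${entry.Hexalias}')/Certificate/\$value")
            .inputStream.bytes
}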

Some technical details


To save some disk space we only keep the latest 5 builds. We run the pipeline daily at 2 AM. For confidentiality reasons I changed our internal URLs into dummy ones, but you get the idea.
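
For reference, the build retention and the schedule can also be expressed directly in a declarative pipeline; this is a generic example rather than our exact job configuration (which you can see in the screenshots below):

pipeline {
    agent any
    options {
        // keep only the latest 5 builds to save disk space
        buildDiscarder(logRotator(numToKeepStr: '5'))
    }
    triggers {
        // run daily at 2 AM
        cron('0 2 * * *')
    }
    stages {
        stage('Backup') {
            steps {
                echo 'backup logic goes here'
            }
        }
    }
}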


Jenkins pipeline details



Jenkins pipeline details


We're basically instructing Jenkins to execute the file admin_SAPCPIBackup_DEV, which is located in a Jenkins repository stored in git. This file contains the instructions for a coded pipeline with multiple stages.
import groovy.json.JsonSlurper

def GitBranch = "master"
def GitComment = "Backup"
def GitFolder = "IntegrationPackages"
def CertificatesFolder = "Keystore"
def packageResponsible = "testdummy@domain.com"

pipeline {
    agent any

    options {
        skipDefaultCheckout()
    }

    stages {
        stage('Download integration artefacts and store it in git for CPI DEV') {
            steps {
                script {
                    // fetch the package binaries via the CPI API and commit the
                    // resulting zip files to git (full logic omitted in this excerpt)
                }
            }
        }
    }
}

We based our code on the following basic recipe from axel.albrecht (@axel.albrecht)

Link to the complete blog from him here

Value added


Easy restore in case we lose an instance

Synchronization of packages to pipelines


The next logical step was to find a way to automatically create a Jenkins pipeline for each package and make them all execute similar checks. Each of our pipelines references a git repository and its respective Jenkinsfile containing the logic to run for that package.

So the idea of this job was to fetch the full CPI package list and check whether a new Jenkinsfile needed to be created in our git, along with a corresponding Jenkins pipeline with the same name as the CPI package.


Sync all packages on Cloud Integration creating a pipeline per package


<Package specific pipeline>


Pipeline configuration retrieving the code to execute from a git repository
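
As an illustration of how such a synchronization job could create the per-package pipelines, here is a sketch using the Jenkins Job DSL plugin; this is not necessarily the mechanism we used, and the package list, repository URL and credentials id are placeholders:

// hypothetical Job DSL seed script: one pipeline job per CPI package,
// each pointing at the package's Jenkinsfile stored in git
def packageIds = ['PackageA', 'PackageB']   // in reality, taken from the CPI package list

packageIds.each { pkgId ->
    pipelineJob(pkgId) {
        definition {
            cpsScm {
                scm {
                    git {
                        remote {
                            url('https://gitea.dummy.local/cpi/jenkins-files.git')   // placeholder
                            credentials('gitea-credentials')                          // placeholder id
                        }
                        branch('master')
                    }
                }
                scriptPath("${pkgId}/Jenkinsfile")
            }
        }
    }
}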


All our generated Jenkinsfiles followed the same structure:

  • Back up the source code of that particular CPI package into a git repository. If no git repository existed yet, a new one was created using the Gitea API

  • Create a Crucible repository, connect it to the Gitea git repository, and create a Crucible project and a Crucible code review to make sure we keep track of unreviewed files

  • Crawl through the source iflows looking for message mappings. When found, submit them to an extraction tool which generates a nice HTML report with syntax coloring for the mappings done there (this serves documentation purposes)

  • Automatically generate a markdown page for the git repository containing all package information, all iflows, the description per iflow, a screenshot of the iflow, as well as the message mapping documentation table. Example:

  • Run CPI Lint (discussed in detail later)

  • Automatically run unit tests for the regular Groovy scripts of your iflows, XSpec for XSLT unit testing, as well as unit tests for message mappings (discussed in detail later)

  • Send email notifications to the package responsible in case of any issues found in any of the steps above


The Jenkinsfiles were generated from a template containing placeholders that are simply replaced when the synchronization job runs. In case the template changes because we want to introduce new functionality, all we need to do is delete all the Jenkinsfiles from git and this job regenerates them and commits them to git again.
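
A simplified sketch of the template idea, with made-up placeholder tokens and only the skeleton of the stages:

// template stored in git; the sync job replaces the tokens per package
def template = '''
pipeline {
    agent any
    environment {
        PACKAGE_ID          = '@@PACKAGE_ID@@'
        PACKAGE_RESPONSIBLE = '@@PACKAGE_RESPONSIBLE@@'
    }
    stages {
        stage('Backup and checks') {
            steps {
                script {
                    // package-specific backup, code review sync, CPI Lint, unit tests, ...
                }
            }
        }
    }
}
'''

String generateJenkinsfile(String template, String packageId, String responsible) {
    return template
            .replace('@@PACKAGE_ID@@', packageId)
            .replace('@@PACKAGE_RESPONSIBLE@@', responsible)
}

println generateJenkinsfile(template, 'PackageA', 'testdummy@domain.com')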

Value added


If a developer starts working on a new CPI package, without them even noticing there is a job collecting the source code and binaries of that new package and storing them in our git repository. If, for instance, the source code does not follow our development guidelines, the package responsible is notified to check it.

Package Responsible


You may have noticed that for each package we identify a package responsible. This is needed because we want to have a concept of ownership and responsibility. Ultimately it is the team's goal to make sure all packages are OK, but having one responsible per package helps a lot.

The package responsible is the person who receives email notifications in case the package doesn't build successfully. How do we calculate who it is?

We get the most recent date from the following evaluations:

  • Last modified date on the package level

  • Last deploy made on one of the artifacts of the package (iflow, value mappings, ...)

  • Package creation date


Each of the dates above has a user associated with it; we consider the user behind the most recent date to be the package responsible. We also implemented a delegation table, keyed by package regular expression and with begin and end dates, so that we can accommodate absences of the package responsible.
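
Conceptually the calculation boils down to picking the user behind the most recent of the three dates; here is a small Groovy sketch with illustrative field names (not the real CPI OData property names):

// each evaluation carries the date of the event and the user who triggered it
class Evaluation {
    Date   date
    String user
}

String resolveResponsible(Evaluation packageModified,
                          Evaluation lastArtifactDeploy,
                          Evaluation packageCreated) {
    def candidates = [packageModified, lastArtifactDeploy, packageCreated]
    // pick the user associated with the most recent of the available dates
    return candidates.findAll { it?.date != null }
                     .max { it.date }
                     ?.user
}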

Another failsafe mechanism: if a package build fails for 10 consecutive days, we send the email not only to the package responsible but also CC the whole team distribution list, so that someone can act on it.
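
Inside a Jenkinsfile, one way to implement such a check is to walk back through the previous builds; a sketch of the idea (the distribution list address is a placeholder):

// count how many previous builds failed in a row
def consecutiveFailures = 0
def build = currentBuild.previousBuild
while (build != null && build.result == 'FAILURE') {
    consecutiveFailures++
    build = build.previousBuild
}

// after 10 consecutive failures, widen the notification to the whole team
def recipients = packageResponsible            // defined at the top of the Jenkinsfile
if (consecutiveFailures >= 10) {
    recipients += ',team-dl@domain.com'        // placeholder distribution list
}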

Value added


We always have a main contact per package. This may not be the person most familiar with the package, but at least it is the person who last interacted with it.

 

Summary


In this first part, we introduced the CI/CD tools we use, highlighted the importance of backups and how to take them, described the quality measures we enforce on all CPI packages and what is currently being documented, and explained how we calculate the package responsible. In the next part we'll discuss in more detail what we did regarding quality control.

I invite you to share feedback or thoughts in the comments section. You can always get more information about Cloud Integration on the product's topic page.