axelalbrecht
Advisor
Many customers and partners keep asking us about Continuous Integration / Continuous Delivery (CI/CD) - and for good reason. CI/CD, and DevOps in general, is a very important topic when you create or maintain integration content, be it integration flows on Cloud Integration, or API proxies and other elements on API Management.

Integration content developers want to keep track of what was changed, keep multiple versions of their integration content, do proper testing before transporting, and so on. Ideally, you have automated all these steps to save yourself work and effort.

The building blocks for all of this have been available on the SAP API Business Hub for quite a while. A blog post describes the OData APIs of Cloud Integration, and OData APIs are also available for API Management.

Different customers have different landscapes and processes, so the CI/CD pipelines differ from customer to customer as well. Instead of supporting one approach and missing out on the others, we want to offer a construction kit with which you can easily select useful scenarios – such as integration flow deployment, storage of new integration flow versions, or creation of a backup of API providers – and adapt them to your specific environment.

Update: In addition to our construction kit, we also contributed commands to the Piper library. You can find the details in the blog post of my colleague Mayur.

Update 2: Check out this blog post about the SAP BTP CI/CD service, which offers a low-code / no-code approach to CI/CD. A first pipeline for SAP Integration Suite is available.

Our construction kit consists of CI/CD pipeline scripts that use the Cloud Integration OData APIs. As source repository we use Git, and as build server we use Jenkins, since both tools are very popular in the community.

As not everybody is familiar and comfortable with scripting, we have done the work for you and built the scripts in such a way that you only need to replace a few configuration parameters specific to your environment; you can leave the rest as is. Those of you who are experts in building script pipelines can use our pipelines as examples and extend or modify them according to your needs.

We’ll talk about the scripts in more detail below. Let’s first look into the necessary steps to use our pipeline scripts.

Prerequisites



  • Your build server is up and running. As we have used a Jenkins build server when building the pipelines, I will link to the Jenkins documentation: https://www.jenkins.io/doc/book/installing/.
    But you can use your own build server, as long as it’s able to process our pipeline scripts.

  • Your source repository / version management system is up and running. In our examples, we’ve used Git, but GitHub is fine, too.
    You probably need one Git repository for each pipeline script file and one Git repository for your whole integration content.

  • The SAP Integration Suite capabilities API Management and/or Cloud Integration or the corresponding standalone products are up and running. If you want to try out the trial version of SAP Integration Suite, this tutorial might help.

  • OData API access for API Management and Cloud Integration is enabled.
    In this first version of our pipeline construction kit, we support API Management only on Cloud Foundry, as we use OAuth2 for authentication and API Management uses Basic authentication on Neo. We plan to provide an update for our API Management pipelines soon that supports Basic authentication, too.
    For instructions on setting up OData API access, check out the following topics:

    • For Cloud Integration, see Authentication. If you only want to perform non-modifying actions (e.g. download artefacts), you can use a read-only role for the API access. If you want to perform modifying actions (deploy artefacts, update configuration parameters, …) you need to add additional roles.








    • As output of the steps above, you get the following for each capability (you can verify these values with the token request sketched after this list):

      • A tenant URL

      • An OAuth token URL

      • Client ID and client secret
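Before you wire anything into Jenkins, you can verify these three values with a plain OAuth2 client-credentials token request. The following is only a sketch with placeholder values; the procedure below describes how to keep the client ID and secret in the Jenkins secure store instead of in a script.

```groovy
// Sketch: verify the service key values with an OAuth2 client-credentials token request.
// Replace the placeholders in angle brackets with your own values.
node {
    sh '''
        curl --fail --silent \
             --user "<clientId>:<clientSecret>" \
             --data "grant_type=client_credentials" \
             "<oauthTokenUrl>"
    '''
}
```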






Procedure


If the prerequisites are fulfilled, you’ll be ready to run our CI/CD pipelines with a few steps only!

Store Credentials on the Build Server


Instead of putting the credentials for your integration capabilities into the pipeline script in plain text, let's use the secure store of the Jenkins build server. See also https://www.jenkins.io/doc/book/using/using-credentials/

To do this, open your Jenkins, go to Manage Jenkins > Security > Manage Credentials.

Store the following credentials with a separate alias/identifier:

  • Client credentials (client ID + client secret) for your integration capabilities

  • Credentials to access your Git repositories (the repository for your integration content and the repositories of your CI/CD pipelines)


The identifier that you enter here is needed either in the pipeline script configuration directly or when creating Jenkins environment variables (see next step).


Fig 1: Add Credentials in Jenkins
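Later on, the pipeline scripts reference these entries only by their identifier, never by the actual secret. Here is a minimal sketch of how that looks; the identifiers 'cpi-oauth-client' and 'git-cred' as well as the repository URL are hypothetical examples.

```groovy
// Sketch: referencing the stored credentials from a pipeline script by their identifiers.
node {
    // Bind the OAuth client ID / secret; the values are masked in the build log.
    withCredentials([usernamePassword(credentialsId: 'cpi-oauth-client',
            usernameVariable: 'CLIENT_ID', passwordVariable: 'CLIENT_SECRET')]) {
        sh 'echo "OAuth client credentials are available as CLIENT_ID / CLIENT_SECRET here"'
    }

    // Git credentials are usually passed by their identifier to the checkout/git steps.
    git url: 'https://<your-git-host>/<your-org>/integration-content.git',
        branch: 'main',
        credentialsId: 'git-cred'
}
```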


 

Add Parameters as Environment Variables on the Build Server


This step is optional, but we recommend storing all parameters such as credential IDs, host names, etc. as environment variables in your Jenkins and using those environment variables in your pipeline script configuration. This way, you have one central place where you can change such parameters at any time without touching every single pipeline script.

To do this, open your Jenkins and go to Manage Jenkins > Configure System > Global Properties > Environment variables.


Fig 2: Add Environment Variables in Jenkins


Be careful which variable names you choose, as Jenkins uses certain predefined variable names, and overwriting them can cause undesired side effects; the variable GIT_BRANCH is one example. For more details, see https://plugins.jenkins.io/git/

Add Your Git User to the Global Jenkins Configuration


For every submission to Git, you need a valid user. Instead of adding a configuration parameter for this user to each pipeline script, we have chosen the global Jenkins configuration.

To do this, open your Jenkins, go to Manage Jenkins > Configure System > section Git plugin. Enter the name and the email address of the user that will submit the changes to your integration content repository.


Fig 3: Add Global Git Configuration
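If you prefer not to maintain the Git user globally, the alternative would be to set it inside the pipeline itself, roughly as sketched below (name and e-mail address are placeholders; run this inside the checked-out workspace before committing).

```groovy
// Sketch: set the Git user per workspace instead of using the global Jenkins configuration.
node {
    sh '''
        git config user.name  "CI Bot"
        git config user.email "ci-bot@example.com"
    '''
}
```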


 

Copy the Pipeline Script to Your Source Repository


As you have to configure the pipeline script to work with your environment, we recommend storing it in a source repository. One of the benefits is that you avoid losing your configuration due to unforeseen circumstances.

(Don’t forget to also add the credentials for the repository in Jenkins as described above, so that Jenkins will be able to access the pipeline script later.)

Create a Jenkins Job and Refer to the Pipeline Script


To execute the script, you need to create a Jenkins pipeline job.

For more information on pipelines, see https://www.jenkins.io/doc/pipeline/tour/hello-world/

To do so, open your Jenkins and click New Item. Enter a self-explanatory name for your pipeline and - depending on your source repository type - select one of the following types for your Jenkins job.
1) Job type Pipeline.

Select this pipeline type if you store one pipeline script per source repository and your source repository is Git. Confirm with OK.


Fig 4: Select job type Pipeline


In the job configuration under section “Pipeline”, change the Definition to “Pipeline script from SCM”, choose Git as SCM and then provide the URL and credentials to your Git repository where your pipeline script is located.


Fig 5: Configure Jenkins Pipeline with Git Repository of Pipeline Script


 

When executing this Jenkins job, Jenkins will automatically search for a pipeline script called “Jenkinsfile” in the specified repository.
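If you have never worked with a Jenkinsfile before, a minimal declarative skeleton looks roughly like the following. This is only meant to show the format; the actual construction-kit scripts contain the real stages and the configuration section described further below.

```groovy
// Jenkinsfile - minimal declarative pipeline skeleton
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo 'Replace this stage with the steps of the construction-kit pipeline you copied'
            }
        }
    }
}
```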
2) Job type Multibranch Pipeline

Select this pipeline type if you are using a different source repository, e.g. GitHub, or if you plan to store multiple pipeline scripts in one repository. Confirm with OK.


Fig 6: Select Job Type Multibranch Pipeline



Under section "Branch Sources" select your source repository.



Fig 7: Select Repository Type


Enter the repository URL and select the credentials for the repository (which you have uploaded before).



Fig 8: Enter Repository Details


Under “Build Configuration”, choose Mode “by Jenkinsfile” and provide the name of the pipeline script that you want to use for this Jenkins job. It's ok if the script does not exist yet.



Fig 9: Specify Pipeline Script Name in Jenkins Job


 

Save the Job configuration.

The job that you have created performs the steps defined in the pipeline script. In case you want to change anything in your pipeline, don't touch the job! Instead, change the pipeline script!

Important: Avoid running Jenkins jobs in parallel that submit content to the same repository, as this can lead to conflicts, just like when two people work on the same Git repository.

Define a Folder Structure in the Integration Content Repository


It is important to have a structured and consistent order for your integration content, so that you get the most out of your CI/CD processes.

In our scripts, we’ve used the following folder structure:


Fig 10: Folder Structure


If you want to use a different folder structure, you can specify this in the pipeline scripts via the corresponding parameter.

Important: Ensure that the folder structure that you want to use in your pipeline script already exists in your Git repository before running the pipeline, otherwise you will receive an error stating "Sparse checkout leaves no entry on working directory".
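The error comes from the Git sparse checkout: the pipelines check out only the configured content folder, and if that folder does not yet exist, Git finds nothing to check out. A rough sketch of such a checkout step is shown below; folder, branch, repository URL, and credentials ID are placeholders.

```groovy
// Sketch: sparse checkout of a single content folder with the Jenkins Git plugin.
checkout([$class: 'GitSCM',
    branches: [[name: 'main']],
    extensions: [[$class: 'SparseCheckoutPaths',
                  sparseCheckoutPaths: [[path: 'IntegrationContent/MyFlow']]]],
    userRemoteConfigs: [[url: 'https://<your-git-host>/<your-org>/integration-content.git',
                         credentialsId: 'git-cred']]
])
```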

 

Now that you’re done with the setup, you can start configuring the scripts and perform the CI/CD processes on your SAP Integration Suite tenants using our pipeline scripts.

So, let’s have a look into the pipeline scripts and where you can find them.

Our Pipeline Scripts


You can find our pipeline scripts here:

GitHub Community for Integration Recipes


After copying the script to the repository that you have linked in the Jenkins job, you can customize the script. As stated above, to make the consumption easy for you, we have bundled all relevant parameters at the top of the script so that you don't have to scan through the entire script just to update a parameter.

To reference the environment variables defined in your Jenkins, use "${env.<parameter name>}". As mentioned earlier, you can also enter values such as credential identifiers or host names directly.


Fig 11: Example of a Pipeline Script with the Highlighted Configuration Part
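For illustration, such a configuration part typically looks similar to the following sketch; the parameter names are examples, not the exact names used in every script.

```groovy
// ----- Configuration section at the top of a pipeline script -----
// Values are read from Jenkins environment variables; you can also hard-code them instead.
def cpiHost          = "${env.CPI_HOST}"            // host name of your Cloud Integration tenant
def oauthTokenUrl    = "${env.CPI_OAUTH_TOKEN_URL}" // OAuth token URL from the service key
def cpiCredentialsId = "${env.CPI_CREDENTIALS_ID}"  // Jenkins credentials ID for client ID / secret
def gitCredentialsId = "${env.GIT_CREDENTIALS_ID}"  // Jenkins credentials ID for the Git repositories
```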


Each pipeline comes with a readme file that tells you about the pipeline's purpose, the required configuration parameters, and related pipelines. If additional steps are required for a specific pipeline, you will find this information in the pipeline description as well.


Fig 12: Example of a Pipeline Description


 

Use Cases


Below, you will find a list of our use cases. Some steps recur in multiple pipelines, some pipelines are completely independent of each other, while others can be combined to cover more complex use cases. So first check all of them and then select the ones that fit your needs best.

 
Download an integration artefact from your Cloud Integration tenant and store it in a Git source repository.

You already have an existing integration flow on your tenant and want to store it in your source repository. This use case is the basis for several activities around integration flow development (a reduced sketch of the central download step follows this list):

  • Create a backup

  • Do security scans of the artefact resources like Groovy scripts

  • Edit scripts or XSLTs in an external editor
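Under the hood, the pipeline requests the artefact as a zip file from the Cloud Integration OData API and then commits it to the repository. Here is a reduced sketch of the download call, assuming the Jenkins HTTP Request plugin; the host, the artefact ID 'MyFlow', and the token handling are placeholders.

```groovy
// Sketch: download a designtime artefact as a zip file via the Cloud Integration OData API.
def host  = '<your-cpi-host>'       // tenant URL from the prerequisites
def token = '<oauth-access-token>'  // obtained via the client-credentials flow shown earlier
httpRequest(
    url: "https://${host}/api/v1/IntegrationDesigntimeArtifacts(Id='MyFlow',Version='active')/\$value",
    customHeaders: [[name: 'Authorization', value: "Bearer ${token}", maskValue: true]],
    outputFile: 'MyFlow.zip'
)
```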


Fetch an integration artefact from Git and upload it to a Cloud Integration tenant; optionally, you can also deploy it.

The counterpart to the pipeline for downloading an integration flow. You can upload the integration artefact to a new tenant; or you re-upload an integration flow after you’ve edited some of the resources. The new artefact version will be taken from the Manifest file inside the artefact.
Update a configuration parameter of an integration artefact on Cloud Integration

As you know, externalizing parameters is a great way of allowing non-technical users to change the integration flow logic without having to understand the entire integration flow. But externalization is also useful for CI/CD, as you can easily change those parameters for testing purposes or after you have uploaded an integration flow to a new tenant. This pipeline does exactly that. After completing the tests, you could also revert the parameters again.
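A sketch of the underlying OData call, with the same assumptions as in the download sketch above; the artefact ID, parameter key, and value are hypothetical, and the exact payload is described in the Cloud Integration OData API documentation.

```groovy
// Sketch: update one externalized parameter of a designtime artefact.
def host  = '<your-cpi-host>'
def token = '<oauth-access-token>'
httpRequest(
    url: "https://${host}/api/v1/IntegrationDesigntimeArtifacts(Id='MyFlow',Version='active')" +
         "/\$links/Configurations('receiverHost')",
    httpMode: 'PUT',
    contentType: 'APPLICATION_JSON',
    customHeaders: [[name: 'Authorization', value: "Bearer ${token}", maskValue: true]],
    requestBody: '{ "ParameterValue": "test-backend.example.com", "DataType": "xsd:string" }'
)
```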
Deploy an integration artefact on Cloud Integration and optionally get the endpoint

After the configuration parameters have been updated as described in the pipeline above, you can deploy the integration artefact. If the artefact has an HTTP-based endpoint, you might be interested in the endpoint URL, which you could then use to send a test message.
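Sketched with the same assumptions as above, this boils down to a deploy request followed by a read of the ServiceEndpoints entity.

```groovy
// Sketch: deploy a designtime artefact and read the exposed endpoints afterwards.
def host  = '<your-cpi-host>'
def token = '<oauth-access-token>'

// Trigger the deployment of the designtime artefact
httpRequest(
    url: "https://${host}/api/v1/DeployIntegrationDesigntimeArtifact?Id='MyFlow'&Version='active'",
    httpMode: 'POST',
    customHeaders: [[name: 'Authorization', value: "Bearer ${token}", maskValue: true]]
)

// Once the runtime artefact has started, its endpoints can be read from ServiceEndpoints
def endpoints = httpRequest(
    url: "https://${host}/api/v1/ServiceEndpoints?\$format=json",
    customHeaders: [[name: 'Authorization', value: "Bearer ${token}", maskValue: true]]
)
echo endpoints.content
```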
Get the status of the last message processing log from your Cloud Integration tenant

After an integration process has been triggered – by a scheduler, by a message from a JMS queue, a file from a file server, or via an external HTTP-based call – you might want to know the status of the last run, including error information in case of failure. If so, this pipeline might help.
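A sketch of the corresponding query, again with placeholder host, token, and artefact ID; note that the OData filter is URL-encoded.

```groovy
// Sketch: read the status of the most recent message processing log of one artefact.
def host  = '<your-cpi-host>'
def token = '<oauth-access-token>'
def mpl = httpRequest(
    url: "https://${host}/api/v1/MessageProcessingLogs?" +
         "\$filter=IntegrationArtifact/Id%20eq%20'MyFlow'&\$orderby=LogEnd%20desc&\$top=1&\$format=json",
    customHeaders: [[name: 'Authorization', value: "Bearer ${token}", maskValue: true]]
)
echo mpl.content   // contains the Status (for example COMPLETED or FAILED) and error information
```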
Get either the message processing log status of a certain integration artefact or a certain MPL ID

Similar to the use case described above (Get the status of the last message processing log from your Cloud Integration tenant), yet this job is independent of any execution. So if you have a regular job running on your tenant and you want to know if it was performed successfully or not, you can use this pipeline for a regular check.
Deploy a scheduled/polling integration flow and check the message processing log status

This is a combination of two of the scenarios above (flow deployment and MPL check). If you have an integration artefact triggered by a scheduler, or one that is polling messages from a queue or an FTP server, you can deploy it and check the last message processing log immediately afterwards.
Deploy a scheduled/polling integration flow, check the message processing log status and whether it was executed successfully, download the integration flow and store it in the Git source repository

This is a further extension of the use case above (flow deployment and MPL check), useful when you have an integration artefact triggered, for example, by a scheduler (with the Run Once setting). If the message processing runs fine, you know that the flow is in a good state to download and store in Git. Use it as a backup for further operations like security scans, or in order to deploy it to other tenants as well.
Compare an integration artefact version on Cloud Integration with the version stored in your source repository. If the versions are different, download the version from Cloud Integration and store it in your repository.

If you want to ensure that every version change on Cloud Integration leads to an automatic storage in your source repository, this pipeline will help. It compares the versions and if they are different, the integration artefact will be downloaded and stored in Git.
Upload any modified resource of your integration artefact back to your Cloud Integration tenant

A very useful use case. As mentioned above, you might want to use an external editor for your script development. But how do you bring the updated resources back into the tenant without overwriting the whole flow? Uploading the whole flow as it's stored in your repository is not a good idea, as your colleagues might already have updated the flow directly in the tenant. This job helps, because it can be invoked by a commit to your source repository. It checks which resources (scripts, XSLTs, ...) have been added, modified, or removed, and updates the integration flow on the tenant accordingly.
Undeploy an integration artefact

After all the testing activities, you might want to undeploy an integration artefact to clean up the tenant. This job helps you with that.
Download of an API Provider of API Management and store it in Git

This job will support you in your backup process or in the preparation for a transport to a different tenant by downloading an API Provider and storing it in Git.
Upload of an API Provider from Git to API Management

The counterpart of the API Provider download scenario. Use this job, if you want to upload the API Provider to a new tenant or you have to restore your backup.
Download of a Key-Value-Map of API Management and store it in Git

If you want to transport an API proxy, the key-value maps (KVMs) it uses need to be in place first, as otherwise the API proxy won't work in the target tenant. With this job, you can store the KVMs in Git.
Upload of a Key-Value-Map from Git to API Management

The counterpart to the Key-Value-Map (KVM) download. Use this job to transport a KVM into a tenant or to recover it.
Download of an API proxy of API Management and store it in Git

This pipeline helps you to create a backup or transport an API proxy by downloading and storing it in Git.
Upload of an API proxy from Git to API Management

The counterpart to the API Proxy download pipeline. Upload an API proxy to a new tenant or restore a backup. Important: Ensure that the referenced API provider and KVMs are available on the tenant before you import the API proxy.
Discover and download all API providers of API Management and store them in Git

A mass operation to download all existing API providers of the API Management tenant and store them in Git.

 
More use cases?

If you think that we’ve missed important use cases, I would like to invite you to contribute more pipelines or to enhance the existing ones via our GitHub community. Let’s improve the DevOps practices around SAP Integration Suite together.