Running a Sample Project Using the SAP Cloud SDK Pipeline on Jenkinsfile Runner
This blog gives a brief overview of how a CAP (Cloud Application Programming Model) project can be run using the SAP Cloud SDK pipeline on Jenkinsfile Runner (Pipeline as a Service). An example project named “GettingStartedBookshop”, provided by the SAP Cloud SDK team, serves as the CAP scenario. The project is built with the SAP Cloud SDK pipeline, while the pipeline itself runs on Jenkinsfile Runner.
A couple of things need to be prepared before running the pipeline. First, ensure that the cluster is configured with the Jenkinsfile Runner image; without it, a pipeline cannot be run on Jenkinsfile Runner. Detailed information on how to configure and use this image is available in this link.
The Jenkinsfile Runner installation provides several custom resource definitions (CRDs), for example tenants and tenant namespaces. Also ensure that an Elasticsearch instance is configured to receive the build logs, so that the logs can eventually be viewed in Kibana.
The installation of the Elasticsearch and Kibana instances is described in detail here.
First, create a client namespace, as described here.
After that, create a tenant resource inside the client namespace as shown here.
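For illustration, a tenant resource could look like the following minimal sketch. This is an assumption based on the API group of the PipelineRun resource shown later in this blog; the exact fields may differ in your installation, so consult the linked documentation.

```yaml
# Illustrative Tenant resource (a sketch, not the authoritative schema).
# The apiVersion is assumed to match the PipelineRun resource used below.
apiVersion: steward.sap.com/v1alpha1
kind: Tenant
metadata:
  name: tenant-1
  namespace: <CLIENT_NAMESPACE>   # the client namespace created above
```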
Now, the client namespace contains the list of tenants together with their tenant namespaces.
To list the tenants in the client namespace, use the following command:
kubectl get tenants -n <CLIENT_NAMESPACE>
which produces the following result:
NAME       READY   TENANT-NAMESPACE   AGE
tenant-1   True    tenant-1-ns        57d
tenant-2   True    tenant-2-ns        21d
After obtaining the tenant namespace, create a PipelineRun resource as shown below and deploy it into the tenant namespace.
apiVersion: steward.sap.com/v1alpha1
kind: PipelineRun
metadata:
  generateName: running-cap
spec:
  args:
    GIT_COMMIT_TO_BE_BUILT: "master"
    GIT_CREDENTIAL_ID:
    GIT_URL: "https://github.wdf.sap.corp/D070410/GettingStartedBookshop.git"
    GIT_BRANCH: "master"
  jenkinsFile:
    repoUrl: https://github.wdf.sap.corp/D070410/cloud-s4-sdk-pipeline.git
    revision: 1.0
    relativePath: s4sdk-pipeline.groovy
  logging:
    elasticsearch:
      runID:
        buildNumber: 1
  secrets:
    - fortify-credential
    - checkmarxscan-credential
    - whitesource-credential
Define the parameter values for the arguments as shown above, for instance GIT_COMMIT_TO_BE_BUILT, GIT_URL, and GIT_BRANCH of the project that you want to run with Jenkinsfile Runner. Under “repoUrl”, define the repository URL where the pipeline resides; in this example, the SAP Cloud SDK pipeline is used. Under “relativePath”, define where the Jenkinsfile is located. The listed secrets are deployed as Kubernetes secrets in the tenant namespace.
Define all the credentials needed to authenticate against external servers, for example Checkmarx or Fortify, as secret resources in the tenant namespace. Several types of credentials exist; map each secret resource to a credential type, for example basic auth, secret text, or secret file. This mapping is done via the kubernetes-credentials-provider plugin.
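As an illustration, a username/password credential could be defined as in the following sketch. The labels and annotations follow the conventions of the kubernetes-credentials-provider plugin; the secret name and values here are placeholders, not the actual credentials of the example project.

```yaml
apiVersion: v1
kind: Secret
metadata:
  # The secret name becomes the credential ID inside Jenkins.
  name: checkmarxscan-credential
  labels:
    # Tells the kubernetes-credentials-provider plugin which Jenkins
    # credential type to create from this secret (e.g. basic auth).
    "jenkins.io/credentials-type": "usernamePassword"
  annotations:
    "jenkins.io/credentials-description": "Checkmarx scan user (placeholder)"
type: Opaque
stringData:
  username: <USER>
  password: <PASSWORD>
```

The secret is deployed into the tenant namespace with kubectl create -f, just like the PipelineRun resource below.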
In the above example, the logs of the PipelineRun are sent to Elasticsearch and can eventually be viewed in Kibana.
After creating a YAML file for the PipelineRun resource, deploy it into the tenant namespace with the following command:
kubectl create -f <PATH_TO_FILE> -n <TENANT_NAMESPACE>
After the resource is deployed, it can be fetched with the following command:
kubectl get pipelineruns -n <TENANT_NAMESPACE> -owide
The above command returns the list of PipelineRuns that are running or have run inside the tenant namespace.
Creating a PipelineRun within a tenant namespace creates a sandbox namespace, which exists only while the PipelineRun is running and is deleted immediately after it ends, as it is stateless. Within this sandbox namespace, the Jenkinsfile Runner pod and the dynamic agent pods for the different containers run. During this execution, all secrets from the tenant namespace are copied into the sandbox namespace for authentication with the different services. The actual pipeline run is thus executed inside the Jenkinsfile Runner pod.
The logs of the PipelineRun can be viewed either directly from the pod or by connecting the Elasticsearch instance to Kibana.
The following commands are used to view the logs from the pod:
kubectl get pods -n <SANDBOX_NAMESPACE>
kubectl logs <POD_NAME> -n <SANDBOX_NAMESPACE> -f
Visualizing the logs in Kibana is already described in my previous blog.
To conclude, all the stages of the SAP Cloud SDK pipeline are runnable on Jenkinsfile Runner.