Shanthakumar Krishnaswamy

SAP CAP (Java) with Azure Cosmos DB for PostgreSQL


In this blog post, our main focus is on harnessing Azure Cosmos DB, a globally distributed database, within the CAP framework. As a reference point, I recommend checking out my previous blog post, which covered the steps for preparing the development environment and creating a new project.

In certain scenarios, businesses may need to incorporate an additional relational database alongside SAP HANA Cloud. In such cases, Azure Cosmos DB presents a comprehensive and feature-rich solution with which organizations can develop and operate globally distributed applications. Combining Azure Cosmos DB with SAP HANA Cloud establishes a robust and flexible database environment that can fulfill a wide range of application requirements and achieve outstanding results.


The following preparations are required to follow the steps and deploy the project yourself:

– SAP BTP PAYGO or CPEA agreement
– SAP BTP subaccount
– Entitlements for SAP BTP, Cloud Foundry runtime
– Azure cloud platform subscription

Using Azure Cosmos DB

The initial stage of this process involves creating an Azure Cosmos DB cluster, specifically the PostgreSQL-compatible edition for our use case. Unlike the mta.yaml approach, this task cannot be automated from the project itself and requires either manual setup or automation tools such as Azure Resource Manager templates or Terraform. For detailed instructions on setting up Azure Cosmos DB, refer to Create an Azure Cosmos DB for PostgreSQL cluster in the Azure portal.

Ensure that you have configured the IP firewall in Azure Cosmos DB to allow all the NAT IPs associated with your BTP region.

Let’s suppose that you have followed the steps outlined in my previous blog post and successfully created a new project. With the development environment prepared and the new project in place, we can now move forward and integrate Azure Cosmos DB into the CAP framework.

Now that we have the database and base project ready, we can proceed with the deployment of the database schema. Let’s begin this phase of the process.

  1. Execute the following command to create a user-provided service, using the configuration details from Azure Cosmos DB.
    cf create-user-provided-service <service name> -p '{\"dbname\": \"<dbname>\",\"hostname\": \"<db host>\",\"password\": \"<password>\",\"port\": \"<db port>\",\"schema\": \"public\",\"username\": \"<db user>\",\"sslrootcert\": \"<root certificate>\"}'
    cf create-user-provided-service sample-db -p '{\"dbname\": \"citus\",\"hostname\": \"\",\"password\": \"<password>\",\"port\": \"5432\",\"schema\": \"public\",\"username\": \"citus\",\"sslrootcert\": \"-----BEGIN CERTIFICATE-----\nMIIDjjCCAnagAwIBAgIQAzrx5qcRqaC7KGSxHQn65TANBgkqhkiG9w0BAQsFADBh\nMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3\nd3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBH\nMjAeFw0xMzA4MDExMjAwMDBaFw0zODAxMTUxMjAwMDBaMGExCzAJBgNVBAYTAlVT\nMRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j\nb20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IEcyMIIBIjANBgkqhkiG\n9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuzfNNNx7a8myaJCtSnX/RrohCgiN9RlUyfuI\n2/Ou8jqJkTx65qsGGmvPrC3oXgkkRLpimn7Wo6h+4FR1IAWsULecYxpsMNzaHxmx\n1x7e/dfgy5SDN67sH0NO3Xss0r0upS/kqbitOtSZpLYl6ZtrAGCSYP9PIUkY92eQ\nq2EGnI/yuum06ZIya7XzV+hdG82MHauVBJVJ8zUtluNJbd134/tJS7SsVQepj5Wz\ntCO7TG1F8PapspUwtP1MVYwnSlcUfIKdzXOS0xZKBgyMUNGPHgm+F6HmIcr9g+UQ\nvIOlCsRnKPZzFBQ9RnbDhxSJITRNrw9FDKZJobq7nMWxM4MphQIDAQABo0IwQDAP\nBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBhjAdBgNVHQ4EFgQUTiJUIBiV\n5uNu5g/6+rkS7QYXjzkwDQYJKoZIhvcNAQELBQADggEBAGBnKJRvDkhj6zHd6mcY\n1Yl9PMWLSn/pvtsrF9+wX3N3KjITOYFnQoQj8kVnNeyIv/iPsGEMNKSuIEyExtv4\nNeF22d+mQrvHRAiGfzZ0JFrabA0UWTW98kndth/Jsw1HKj2ZL7tcu7XUIOGZX1NG\nFdtom/DzMNU+MeKNhJ7jitralj41E6Vf8PlwUHBHQRFXGU7Aj64GxJUTFy8bJZ91\n8rGOmaFvE7FBcf6IKshPECBV1/MUReXgRPTqh5Uykw7+U0b6LJ3/iyK5S9kJRaTe\npLiaWN0bfVKfjllDiIGknibVb63dDcY3fe0Dkhvld1927jyNxF1WW6LZZm6zNTfl\nMrY=\n-----END CERTIFICATE-----\"}

    Note: To connect securely, the application requires a certificate generated from a trusted Certificate Authority (CA) certificate file (.cer). Download the CA certificate for Azure Cosmos DB for PostgreSQL first, then transform it into the single-line format illustrated in the example.
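A minimal sketch of that transformation (bash/awk; the file name root.cer is an assumption, substitute your downloaded CA file). It flattens the PEM file into one line with literal \n escapes, as required by the JSON payload of the user-provided service:

```shell
# Flatten a PEM certificate into a single line: drop carriage returns,
# skip empty lines, and join lines with a literal "\n" escape sequence.
awk 'NF {sub(/\r/, ""); printf "%s\\n", $0}' root.cer
```

You can paste the output into the sslrootcert value, or store the whole credentials JSON in a file and pass the file path to cf create-user-provided-service -p instead of an inline string.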

  2. The @cap-js/postgres package uses the cds-plugin technique to automatically configure your application to use a PostgreSQL database in production. Add the necessary database packages by executing the following commands inside your project’s root directory.
    npm add @cap-js/postgres
    npm add @sap/cds-dk@7
  3. Enhance the package.json to incorporate database details, custom scripts, and build tasks as follows.
     "scripts": {
       "deploy": "cds-deploy"
     },
     "cds": {
       "build": {
         "target": ".",
         "tasks": [
           { "for": "nodejs", "src": "db", "options": { "model": ["db", "srv"] } },
           { "for": "java", "src": "srv", "options": { "model": ["db", "srv"] } }
         ]
       },
       "requires": {
         "db": {
           "kind": "postgres",
           "impl": "@cap-js/postgres",
           "pool": {
             "acquireTimeoutMillis": 3000
           },
           "vcap": {
             "label": "user-provided"
           }
         }
       }
     }

    In the updated package.json, we have introduced several modifications. Let’s examine them individually:

    • scripts.deploy: The hyphen in “cds-deploy” is essential because we do not use “@sap/cds-dk” for deployment. If you are interested in using “@sap/cds-dk” for other reasons, you may consider incorporating the apt-buildpack in your deployment module.
    • There are two build tasks to facilitate a Cloud Foundry deployment. One task is for Node.js, and the other is for Java. This approach empowers us to handle database schema deployment using Node.js while executing the application through Spring Boot.
    • requires.db.pool.acquireTimeoutMillis: This parameter determines the duration allowed for waiting until an existing connection is retrieved from the pool or a new connection is established. By default, this value is set to 1000 milliseconds. If the database connection is taking longer than expected, you can increase this parameter to allow for a longer waiting time.
    • requires.db.vcap.label: If a service is bound to your application and carries the label “postgresql-db,” it is automatically chosen as the default option. This feature is particularly valuable in cases where user-defined services are used. As we are currently utilizing a user-provided service, please retain the value as “user-provided”.
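As a quick sanity check of the edits above (a sketch, assuming jq is installed), you can read the configuration back from package.json:

```shell
# Read the db kind and deploy script back from package.json;
# expects the values configured above: "postgres" and "cds-deploy".
jq -r '.cds.requires.db.kind, .scripts.deploy' package.json
```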
  4. Now, after enhancing the package.json, we can manually initiate the build by executing the cds build command, which generates files and folders ready for deployment. However, executing this step right now is not mandatory, as it happens automatically during the mta build stage. The next step is the final preparation: creating the mta.yaml file for deployment.
  5. Use the following command in the project’s root folder to generate the mta.yaml file with the module and resource definitions.
    cds add mta
  6. The mta.yaml file generated in the previous step needs some adjustments before it can be deployed.
    • To leverage the user-provided service created in the previous step, integrate the given resources definition into the mta.yaml file.
        - name: sample-db
          type: org.cloudfoundry.existing-service
    • To allow the server module to utilize the user provided service, simply add the “requires” statement with the service name.
        provides:
          - name: srv-api # required by consumers of CAP services (e.g. approuter)
            properties:
              srv-url: ${default-url}
        requires:
          - name: sample-db
    • To facilitate the deployment of the database schema, including tables and views, to Azure Cosmos DB, we must define the following deployer module.
        - name: pg-db-deployer
          type: nodejs
          path: .
          parameters:
            buildpack: nodejs_buildpack
            stack: cflinuxfs4
            no-route: true
            no-start: true
            disk-quota: 2GB
            memory: 512MB
            tasks:
              - name: deploy
                command: npm run deploy
                disk-quota: 2GB
                memory: 512MB
          build-parameters:
            builder: custom
            commands:
              - npm install --production
              - npx cds build --production
            ignore: ["node_modules/", "mta_archives/", "tmp/", "srv/target/"]
          requires:
            - name: sample-db


  7. At this point, we have the option to build and deploy only the pg-db-deployer module. However, we will go further by configuring the Spring Boot connection details to deploy both modules together.
  8. To integrate the PostgreSQL JDBC driver, add its dependency to the srv/pom.xml file, for example:
        <dependency>
            <groupId>org.postgresql</groupId>
            <artifactId>postgresql</artifactId>
            <scope>runtime</scope>
        </dependency>
  9. Incorporate the cds-dk version in the srv/pom.xml file, ensuring that it matches the version specified in package.json. For instance, you can add <version>7.0.3</version> under the <configuration> of the install-cdsdk <execution> of the cds-maven-plugin.
  10. By incorporating the specified database connection details into the application.yaml file, your SAP CAP application will seamlessly establish a connection with Azure Cosmos DB using the credentials provided.
      spring:
        config.activate.on-profile: cloud
        datasource:
          driver-class-name: org.postgresql.Driver
          url: jdbc:postgresql://${vcap.services.sample-db.credentials.hostname}:${vcap.services.sample-db.credentials.port}/${vcap.services.sample-db.credentials.dbname}
          username: ${vcap.services.sample-db.credentials.username}
          password: ${vcap.services.sample-db.credentials.password}
          initialization-mode: never
          hikari:
            maximum-pool-size: 10

    Note: Replace “sample-db” with the name of your user-provided service instance specified in the mta.yaml file.
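If you are unsure which credential keys the bound service exposes, inspect its VCAP_SERVICES entry (e.g. via cf env or the BTP cockpit). A sketch with jq, using an inlined VCAP_SERVICES document shaped like the sample-db service above:

```shell
# List the credential keys of the sample-db user-provided service from a
# VCAP_SERVICES JSON document (inlined here for illustration).
VCAP_SERVICES='{"user-provided":[{"name":"sample-db","credentials":{"dbname":"citus","hostname":"example.postgres.cosmos.azure.com","port":"5432","username":"citus","password":"secret"}}]}'
echo "$VCAP_SERVICES" | jq -r '.["user-provided"][] | select(.name == "sample-db") | .credentials | keys[]'
# prints (alphabetical): dbname, hostname, password, port, username
```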

All configurations have been completed, and we are now fully prepared for the deployment phase.

Deploy to BTP Cloud Foundry Runtime

To deploy your application to the SAP Business Technology Platform (BTP) Cloud Foundry Runtime, follow these steps:

  1. To generate a single mta.tar archive, execute the following command in the project root:
    mbt build

    This command will package all the components of the Multi-Target Application (MTA) into a single mta.tar archive, which can then be used for deployment or distribution.

  2. Congratulations! You have reached the final stage. Now, you can proceed with deploying the previously generated archive to Cloud Foundry by executing the following command:
    cf deploy .\mta_archives\sample_1.0.0-SNAPSHOT.mtar
  3. After the successful deployment, the application is now prepared for testing. Obtain the application URL from the BTP cockpit, or alternatively, execute the cf app sample-srv command to retrieve it.


    Application home screen

    Click on the Books entity, enter “system” as the username, leave the password empty, and click Sign in. This will display the Books entity sample data.

Schema validation

To connect directly to an Azure Cosmos DB cluster from your local environment, follow these steps:

  1. Sign in to the Azure portal and manage public access for Azure Cosmos DB for PostgreSQL. Remember that access is granted only from your current IP, so if your IP changes, you should update the firewall rules accordingly.
  2. Download and install the community edition of DBeaver and connect to Azure Cosmos DB:
    HOST: Azure Cosmos DB host
    PORT: Azure Cosmos DB port
    DATABASE: “dbname” tag value from the environment variable
    USERNAME: “username” tag value from the environment variable
    PASSWORD: “password” tag value from the environment variable

    DBeaver Connection Settings

  3. After configuring the connection settings in DBeaver, click the “OK” button to save the connection. Once the connection is established, you can explore the “public” schema of the Azure Cosmos DB. This schema contains the tables and objects generated from your CDS models.


Delta deployment

  1. Make changes to the “db\data-model.cds” file by adding a description field.
    namespace my.bookshop;
    entity Books {
      key ID : Integer;
      title  : String;
      description: String;
      stock  : Integer;
    }
  2. By executing these commands, you can build and deploy only the deployer module, ensuring that the changes made to the data model are reflected in the Azure Cosmos DB.
    mbt build
    cf deploy .\mta_archives\sample_1.0.0-SNAPSHOT.mtar -m pg-db-deployer
  3. After deploying the updated data model changes and sample data, validate them using DBeaver.

Schema Deployment from local host

In the development phase, we anticipate the need for multiple schema deployments, and running the MTA build for each of them could result in delays. In light of this, let’s explore an alternative approach: deploying directly from your localhost.

  1. Let’s generate a “default-env.json” file in the project root, using the application’s VCAP_SERVICES environment variable.
  2. Let’s proceed with the deployment by executing the following commands.
    cds build --production
    cds deploy

    That’s it! Now, the changes have been deployed directly from your localhost, eliminating the need for mbt build.
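Step 1 above can be sketched as follows, assuming you copied the VCAP_SERVICES value (from cf env or the cockpit) into a file named vcap.json (a hypothetical name):

```shell
# Wrap the copied VCAP_SERVICES value into the default-env.json
# structure that cds reads when running against a remote service locally.
jq -n --slurpfile v vcap.json '{VCAP_SERVICES: $v[0]}' > default-env.json
```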

Related Blogposts

Run and Deploy SAP CAP (Node.js or Java) with PostgreSQL on SAP BTP Cloud Foundry | SAP Blogs

Architecting solutions on SAP BTP for High Availability | SAP Blogs


I trust this gives you a brief insight into the process of running a CAP Java application on BTP while leveraging Azure Cosmos DB as the database. Moreover, this approach can be expanded to incorporate various data sources, such as PostgreSQL Service, Amazon Aurora for PostgreSQL, Google Cloud AlloyDB, and others. The possibilities for integrating different data sources are vast, offering flexibility and scalability to meet diverse application requirements.

We highly appreciate your feedback and welcome any comments or questions you may have.



