Run and Deploy CAP with PostgreSQL on SAP Cloud Platform Cloud Foundry (Node.js)

PostgreSQL support for the SAP Cloud Application Programming Model (CAP) has been in the news lately, thanks to first and third places in SAP’s Devtoberfest and some blog posts from the SAP Community introducing the related open source projects:

  1. PostgreSQL persistence adapter for CAP (Node.js)
  2. Getting started with CAP on PostgreSQL (Node.js)

The first post by Volker Buzek and Gregor Wolf is an introduction to cds-pg, the database adapter for PostgreSQL. The second post contains a step-by-step guide on how to get started with PostgreSQL in a local development environment. It also introduces cds-dbm, the tool required to deploy the CDS data model to the PostgreSQL database.

With the latest developments in both Node.js modules, it is now possible to deploy a CAP application with PostgreSQL as a database to SAP Cloud Platform Cloud Foundry.

Since there is also a native PostgreSQL service available on Cloud Foundry (PostgreSQL on SAP Cloud Platform, hyperscaler option), the deployment can be done without any extra work (creating a user-provided service, manually managing the database schema, …), just by leveraging CAP, the additional PostgreSQL-related Node.js modules, and the MTA build and deployment tools.

If this grabs your attention, you will find below a comprehensive description of the steps required to prepare and deploy a local CAP project running on PostgreSQL (the one created in my first blog post) to an SAP Cloud Platform CF trial account.

Honorable mention
Gregor Wolf did an outstanding job getting his head around the various deployment scenarios in his pg_beershop and created the necessary foundation for the SAP Cloud Platform deployment. His repository also contains various other deployment scenarios, so if you want to deploy to Kyma, Azure, or even Google Cloud Platform, have a look at his GitHub repo.

Prerequisites

If you want to follow the steps and deploy the project yourself, you need to complete some preparations:

  • You need an SCP trial account (tutorial is here)
  • Ensure that the PostgreSQL, hyperscaler option is in the list of entitlements of your trial account. If not, you need to add the service manually.
  • You need the CF command line tools installed on your system (tutorial is here)
  • You need the CF MultiApps plugin installed (tutorial is here)
  • You need the devtoberfest project in your local environment (via git clone https://github.com/mikezaschka/cap-devtoberfest.git)
  • You need VS Code (or any other code editor) at hand

Preparing the project

The following steps are based on the small devtoberfest-related CAP project I introduced in my first blog post. If you want to follow along and deploy the project yourself, just use the local-dev-only branch (git checkout local-dev-only). The main branch already contains everything we are going to do.

Create an mta.yaml file

The first thing we need to do is to generate an mta.yaml file that will contain the module and service definitions for SAP Cloud Platform CF. We could do this by hand, but @sap/cds supports us by providing a command:

cds add mta

This will generate the mta.yaml file in the root folder of the project. Please ignore the warning that no SAP HANA module will be created. That’s totally fine, since we do not want to use HANA but PostgreSQL instead.

Important notice on the cds command

The cds command is widely used for all sorts of things around CAP (build, deploy, add, etc.). Sadly, as of now, the CAP Node.js modules are very strict when it comes to extending the cds command, which is why cds-dbm needs to introduce its own set of deploy and build commands or override some internal functionality to hook itself into the framework. This will hopefully change in the future, but for now we have to live with it.
When working with PostgreSQL on CAP, this means that certain functionality cannot be used (cds watch, cds build) or might break when PostgreSQL-specific things are added (e.g. cds add mta will no longer work once the build tasks have been defined, as described in the section below).

So please SAP (cc: Daniel Hutzel, Iwona Hahn), make loading custom build tasks possible and introduce hooks to apply custom subcommands and/or options. Thank you!

Create the cds build options

Before we alter the generated mta.yaml file, we first need to add the build configuration. CAP comes with a default configuration, but we need to explicitly redefine the build tasks, because we want to make use of the PostgreSQL-related task that is required to prepare everything for a Cloud Foundry deployment.
The following code snippet needs to be either inserted into the .cdsrc file of your project or directly integrated into the package.json, right under cds:

"build": {
    "tasks": [
        {
            "use": "node-cf",
            "src": "srv"
        },
        {
            "use": "postgres-cf",
            "src": "db",
            "options": {
                "deployCmd": "npx cds-dbm deploy --load-via delta --auto-undeploy"
            }
        }
    ]
}
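For the package.json variant, the snippet nests under the top-level cds key, for example (a minimal sketch; all surrounding package.json fields are omitted):

```json
{
  "cds": {
    "build": {
      "tasks": [
        {
          "use": "node-cf",
          "src": "srv"
        },
        {
          "use": "postgres-cf",
          "src": "db",
          "options": {
            "deployCmd": "npx cds-dbm deploy --load-via delta --auto-undeploy"
          }
        }
      ]
    }
  }
}
```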

The configuration basically specifies two build tasks: one for our server module (node-cf, a standard build task provided by @sap/cds) and one to prepare the PostgreSQL-related artifacts (postgres-cf).
The deployCmd option inside postgres-cf will be used on Cloud Foundry to trigger the actual deployment, but it is only required if you want to use a different behaviour than the default one (which is npx cds-dbm deploy --load-via delta). For a description of all available options, have a look at the documentation.

We can now trigger the build manually right after we applied the configuration by simply executing the following command:

npx cds-dbm build

This should have generated a gen folder in the project with two subfolders, one for each build task. The PostgreSQL-specific artifacts have been written to gen/db, which contains the following files:

  • csn.json
    The CSN definition of the data model.
  • package.json
    The package.json containing the required dependencies.
  • deploy.sh
    A shell script that prepares the environment and is executed to trigger the deployment. This is required because cds-dbm internally relies on the Java-based Liquibase framework.
  • apt.yml
    A required configuration file for the CF apt-buildpack, which is used to add Java to the default Node.js buildpack.
  • manifest.yml
    In case you want to use cf push instead of cf deploy.
  • undeploy.json
    The file containing tables and views that should be removed from the PostgreSQL database (if one is available).
  • data/*.csv
    All relevant .csv files copied from the original db folder that contain default data, which will be imported during deployment based on the defined load strategy.
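Put together, the generated build output looks roughly like this (a folder sketch assembled from the file list above):

```text
gen/
├── srv/              # server module, built by the node-cf task
└── db/               # database deployer, built by the postgres-cf task
    ├── csn.json
    ├── package.json
    ├── deploy.sh
    ├── apt.yml
    ├── manifest.yml
    ├── undeploy.json
    └── data/
        └── *.csv
```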

Since all generated files and folders are ready to be deployed, we can move on to the last preparation step and adjust the mta.yaml file.

Update the mta.yaml file

The initially generated mta.yaml file needs to be adjusted and enhanced in several places:

Update the build command

The first thing to do is to replace the default cds build command with the one from cds-dbm:

build-parameters:
  before-all:
    - builder: custom
      commands:
        - npm install --production
        # use cds-dbm for building, not cds
        - npx cds-dbm build --production

Define the PostgreSQL service

As a second step, we need to define that we want an instance of the PostgreSQL service created for us to use. For this, just add the following resources definition:

resources:
  - name: devtoberfest-database
    parameters:
      path: ./pg-options.json
      service: postgresql-db
      service-plan: trial
      skip-service-updates:
        parameters: true
    type: org.cloudfoundry.managed-service

There are two important aspects of this definition:

  • The skip-service-updates parameter is mandatory, since the PostgreSQL service in the trial environments does not allow any updates to the configuration after the initial deployment.
  • The referenced pg-options.json file can contain configuration parameters for the PostgreSQL service. A list of possible values is available in the official documentation.
    Since we want to use version 11 of PostgreSQL, we can just add this to the pg-options.json file:

    {
        "engine_version": "11"
    }

Create the postgres-deployer module

As a next step, we need to define the deployer module that is required to deploy the database schema (tables and views) to the PostgreSQL database. While the general idea is very similar to the default HANA deployment, the deployer module itself is different (thanks again, Gregor Wolf, for spending some sleepless nights on this one).
Please add the following definition to your modules section:

Update 01.12.2020
Updated the task’s command from ./deploy.sh to chmod 755 deploy.sh && ./deploy.sh to support building and deploying from Windows-based systems.
For more information, please look in the comments below.

  - name: devtoberfest-db-deployer
    type: custom
    path: gen/db
    parameters:
      buildpacks: [https://github.com/cloudfoundry/apt-buildpack#v0.2.2, nodejs_buildpack] 
      no-route: true
      no-start: true
      disk-quota: 2GB
      memory: 512MB
      tasks:
      - name: deploy_to_postgresql
        command: chmod 755 deploy.sh && ./deploy.sh
        disk-quota: 2GB
        memory: 512MB      
    build-parameters:
      ignore: ["node_modules/"]
    requires:
      - name: devtoberfest-database 

This basically defines a new module/application that does not use a pre-defined type but relies on some custom configuration:

Instead of one, two buildpacks will be used for the runtime. Next to the default Node.js buildpack, we also include the apt-buildpack, which enables us to install Java alongside Node.js on the runtime container. And since we only want to trigger the deployment and not keep the application running (and consuming precious resources), we apply neither a route (no-route) nor a start script (no-start). Instead, we define a Cloud Foundry task (with an arbitrary name) that calls the deploy.sh script, which, as mentioned above, sets some required environment variables and then triggers the deployCmd (default or specified in the package.json).

Add the postgres service to the server module

As a final step, we need to add the PostgreSQL service to the server module by simply adding the requires statement:

  # --------------------- SERVER MODULE ------------------------
  - name: devtoberfest-srv
    # ------------------------------------------------------------
    type: nodejs
    path: gen/srv
    provides:
      - name: srv-api # required by consumers of CAP services (e.g. approuter)
        properties:
          srv-url: ${default-url}
    # also use our PostgreSQL database
    requires:
      - name: devtoberfest-database

And that’s it: we have applied all the required configuration and are now ready to move on to the actual deployment.

The final mta.yaml file can be viewed here.

Add mbt to the package.json

As a last but important step, which isn’t PostgreSQL-specific:
Since we want to deploy our application using the MTA approach and our app will consist of multiple apps and services, we will leverage the Cloud MTA Build Tool (MBT) provided by SAP to create the required .mtar file. As we are using Node.js, we can simply add the mbt module as a dev dependency in the package.json and trigger the install via npm i from our project folder.

"devDependencies": {
  "mbt": "~1.0.16"
},

Deploying to SAP Cloud Platform

The deployment part is straightforward and simple: just use one command to trigger the build and another one to deploy to SAP Cloud Platform.

Build via mbt build

The first task to be done is to build the whole project and create the .mtar file. Thanks to the Cloud MTA Build Tool, this is just a simple call on the command line:

npx mbt build

Once this is done, you should have a deployable file that contains all the source code and instructions required to deploy and run the project on SAP Cloud Platform: mta_archives/devtoberfest_1.0.0.mtar.

Deploy via cf deploy

Before starting the final step, make sure you are logged into your Cloud Foundry trial space through the command line tools (cf login). Then, just trigger the deployment:

cf deploy mta_archives/devtoberfest_1.0.0.mtar

The deployment will take some time, since (at least in my case) the creation of the PostgreSQL service took several minutes. But in the end you should get the message that everything has been executed and successfully deployed:

  • The PostgreSQL service instance has been created and is available.
  • The devtoberfest-db-deployer module has been created and its task executed to deploy the data model to the PostgreSQL database.
  • The devtoberfest-srv module has been created and started, so it should be available as an HTTP endpoint.

If you now go to the Cloud Cockpit of your trial account, you should see the deployed server application and the available but stopped deployer module.

Now simply open the URL of your server application in the browser and voilà, there are the Devtoberfest projects, deliciously served from your PostgreSQL database through CAP on Cloud Foundry.

Some enhancements

In the devtoberfest project on GitHub, I wrapped the build and deploy calls for convenience in some npm scripts:

  • npm run build:cf will call mbt build internally
  • npm run deploy:cf will call cf deploy internally
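In the package.json, these wrappers boil down to something like this (a sketch; the .mtar file name matches the one generated earlier):

```json
"scripts": {
  "build:cf": "mbt build",
  "deploy:cf": "cf deploy mta_archives/devtoberfest_1.0.0.mtar"
}
```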

I also removed the database credentials for local development from the package.json file, since we don’t want to move them around all the time. On Cloud Foundry, the database credentials get automatically injected via environment variables during the deployment. The local credentials should therefore be placed in a default-env.json file (please also have a look at the readme of the project on GitHub).
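Such a default-env.json mimics the VCAP_SERVICES environment variable that Cloud Foundry injects. A minimal sketch (the exact entry shape and all credential values are illustrative and depend on your local PostgreSQL setup; the readme of the project is the authoritative reference):

```json
{
  "VCAP_SERVICES": {
    "postgres": [
      {
        "name": "devtoberfest-database",
        "label": "postgres",
        "tags": ["plain"],
        "credentials": {
          "host": "localhost",
          "port": "5432",
          "database": "devtoberfest",
          "user": "postgres",
          "password": "postgres"
        }
      }
    ]
  }
}
```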

Some closing words

In this post I have shown the various steps required to deploy a CAP project with PostgreSQL as a database to SAP Cloud Platform Cloud Foundry. If you already have some experience with CAP and CF deployments using SAP HANA, you should see that only small things differ in the whole process. This even applies to continuous deployments of your application: if you adjust things in your local application and do a re-deploy, cds-dbm will not drop your already deployed schema and data, but will identify the delta between the existing schema/data and your updated data model and safely alter everything.

While this already sounds and looks very promising, please keep in mind that both cds-pg and cds-dbm are still very young and in active development, and some features are missing to reach parity with the default CAP functionality (see the todo list in the cds-pg project). Nevertheless, I would encourage you to start using both in cases where you want to take advantage of the power of CAP but where SAP HANA may be oversized or otherwise not suitable as the persistence layer.

And if you encounter some bugs or identify missing features, please reach out to the developers on GitHub and file the respective issues… or even better, since the libraries are open source, get your hands dirty and contribute directly to the projects!

5 Comments
  • Thanks Mike for the blog

    I tried and got the errors mentioned below:

    [Screenshot: Error – apt key]

    [Screenshot: Permission denied to execute the deployer.sh]

    Thanks Mike and Gregor Wolf for assisting.

    Please make a note that I am using a non-trial subaccount.

    • Hi Sandeep,

      first of all, thanks for starting to use the cds-pg tooling.
      Regarding your problems:

      The first one seems to be a warning. We may look into this, but it should not break the deployment.

      The second one is an actual error. The deploy.sh script needs to be executable and it seems that’s not the case for you. Can you please give me some more information?

      • Operating System on which the build and deploy were triggered
      • Version of cds-dbm

      I assume you are using Windows? I’ll probably need to adjust the file permissions to work across operating systems.
      In the meantime, you can change the deployer task to modify the file permissions before calling the script:

        - name: devtoberfest-db-deployer
          type: custom
          path: gen/db
          parameters:
            buildpacks: [https://github.com/cloudfoundry/apt-buildpack#v0.2.2, nodejs_buildpack] 
            no-route: true
            no-start: true
            disk-quota: 2GB
            memory: 512MB
            tasks:
            - name: deploy_to_postgresql
              command: chmod 755 deploy.sh && ./deploy.sh
              disk-quota: 2GB
              memory: 512MB      
          build-parameters:
            ignore: ["node_modules/"]
          requires:
            - name: devtoberfest-database      
      • Thanks for your guidance.

        Am using Windows to execute the mbt build and cf deploy, and the cds-dbm version is ~0.0.17

        I am able to deploy after changing the command as suggested by you.

        Thanks once again.

  • There’s not much to say, Mike. That’s a great post that shows how to use even greater tools! I especially like how you kept the cap-pg flow analogous to the classical cap-HANA flow to make it easy for everyone!