
Getting started with CAP on PostgreSQL (Node.js)


The SAP Cloud Application Programming Model (CAP) is an opinionated framework of libraries, languages and tools by SAP for building enterprise-grade cloud services and applications. It comes with pre-baked support for SAP technologies (SAP HANA, SAP Event Mesh, etc.), but is increasingly designed to be open to other tools, platforms and standards.
One major part of CAP is the domain model, in which all the domain entities are defined via Core Data Services (CDS) and connected to external services or databases. Out of the box, CAP has native support for SAP HANA (and SQLite, for development scenarios only), but it is also designed to support bring-your-own-database scenarios (at least since version 4.1).

PostgreSQL is a powerful Open Source database that can be downloaded and used for free and is available on almost every PaaS provider, including SAP BTP (PostgreSQL on SAP BTP). While SAP HANA may be the better choice for CAP projects closely related to other SAP systems (S/4HANA, etc.), PostgreSQL can be a powerful and cheaper alternative in other scenarios (that may not be connected to SAP BTP at all).

But how can PostgreSQL be used in CAP?

By using two Open Source Node.js modules in combination with @sap/cds:

  • cds-pg
  • cds-dbm

cds-pg – The PostgreSQL adapter for SAP CDS

Since PostgreSQL support is not natively available in CAP, the integration must be provided by others. In August 2020, Volker Buzek and Gregor Wolf started their efforts to build cds-pg, a PostgreSQL adapter for CAP, and made it Open Source on GitHub. They also shared their vision and invited others to contribute in a blog post.
Since then, several community members (David Sooter, Lars Hvam, Mike Zaschka) have contributed to the project, and while it is not yet ready for production use, cds-pg already supports many features of CAP (which will be shown later in this post).

cds-dbm – Database deployment for PostgreSQL on CAP

While cds-pg contains the functionality to translate the CDS model to native PostgreSQL at runtime, there is a closely related library that deals with the deployment of the generated database schema (tables and views): cds-dbm.
For SAP HANA, SAP provides the @sap/hdi-deployer module, which handles all the relevant deployment tasks (analyzing the delta between the current state of the database and the current state of the CDS model, deploying the changes to the database, loading CSV files, etc.). cds-dbm provides this functionality for cds-pg and is designed to also support other potential CAP database adapters (think of SQL Server, DB2, MySQL, MariaDB, Amazon RDS…) in the future (which is why the functionality is not baked into cds-pg, but lives in its own module).

Start using cds-pg and cds-dbm in a CAP project

There are already some projects available that can act as a reference on how to use cds-pg and cds-dbm, e.g. the pg-beershop project by Gregor Wolf (which includes deployment scenarios for many different cloud service providers). But since this blog post should also showcase that the development workflow feels very similar to native CAP development, we will start from scratch.
If you just want to look at the source code, you can find it on GitHub.


To follow along the upcoming steps, you need to have the following tools installed on your system:

  • Node.js (version 12)
  • Java (JRE in at least version 8)
  • Docker (for running the PostgreSQL database)
  • Visual Studio Code (or another editor)

Many of the steps that need to be done are part of the default development workflow of CAP. A more detailed explanation of those standard steps can be found in the official documentation.

Create the initial project

To create the initial project, you need to have the @sap/cds-dk library installed as a global Node.js module.

npm i -g @sap/cds-dk

With this in place, we can kickstart the project by letting @sap/cds-dk generate the devtoberfest project. While SAP's Devtoberfest is/was a real event, our project will just act as a demo and contain a data model of projects and votes.

cds init devtoberfest

This should have created the base folder structure which can be opened in VS Code (or any other editor).

Add and setup the PostgreSQL database

To actually use PostgreSQL, we need to have a database in place. For this, we rely on Docker. Simply create a docker-compose.yml file in the root folder of the project and insert the following content:

version: '3.1'

services:
  db:
    image: postgres:alpine
    restart: always
    environment:
      POSTGRES_PASSWORD: postgres
    ports:
      - '5432:5432'
  adminer:
    image: adminer
    restart: always
    ports:
      - '8080:8080'

We basically define two Docker containers: one for the PostgreSQL database itself and one for adminer, a web-based tool to access the database. Start both containers from the project root via docker-compose up -d.

UPDATE (22.11.2020)
With version 0.0.14, cds-dbm can create the database automatically during deployment, so the steps to log in to adminer and create the database by hand are no longer required. Just add the --create-db flag when running the deployment:
npx cds-dbm deploy --create-db

To create the database, just open the browser and access the adminer interface at http://localhost:8080. Log in with the following credentials (these will also be required for cds-pg later):

  • Server: db (this is the name of the PostgreSQL service in the docker-compose.yml file)
  • User: postgres
  • Password: postgres

In the adminer interface, just create a new database and give it the name devtoberfest. And now we are ready to go.

Include and configure cds-pg and cds-dbm

The next step contains the integration and configuration of the Node.js modules. Since both are standard NPM modules, they can easily be installed via:

npm i cds-pg
npm i cds-dbm

The configuration is the more complex part, but also sticks to the default CAP rules of providing configuration in the project descriptor file (package.json).
The following snippet shows the required parts (the full example can be viewed here):

"cds": {
    "requires": {
      "db": {
        "kind": "database"
      "database": {
        "impl": "cds-pg",
        "model": [
        "credentials": {
          "host": "localhost",
          "port": 5432,
          "database": "devtoberfest",
          "user": "postgres",
          "password": "postgres"
    "migrations": {
      "db": {
        "schema": {
          "default": "public",
          "clone": "_cdsdbm_clone",
          "reference": "_cdsdbm_ref"
        "deploy": {
          "tmpFile": "tmp/_autodeploy.json",
          "undeployFile": "db/undeploy.json"

The cds.requires part is basically standard CAP and defines that there is a db service of the kind database (a freely chosen name), along with the configuration of that service. The only difference to a native (meaning SAP HANA/SQLite) project is the specific naming of cds-pg as the service implementation.

The additional cds.migrations section is currently required by cds-dbm. An explanation of the various configuration options can be found in the cds-dbm documentation.

Start developing the application

Now that the project is set up, we can start building the actual service and application. The good news is that, due to the abstraction of CDS, you can build the application in almost exactly the same way as you would with SAP HANA/SQLite… with some exceptions:
Since it's currently not possible to hook into the cds command, the following commands are not yet supported:

  • cds watch (no equivalent present)
  • cds deploy (use cds-dbm deploy instead, more details on this below)
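As a stopgap for the missing cds watch, a generic file watcher can restart the server on model changes. For example, a hypothetical npm script using nodemon (nodemon is an extra dev dependency and not part of cds-pg or cds-dbm):

```json
"scripts": {
  "watch": "nodemon --ext cds,js --exec \"cds serve\""
}
```

This restarts cds serve whenever a .cds or .js file changes; it does not redeploy the database schema, so a cds-dbm deploy is still needed after model changes.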

The first thing we should do is to define our data model. We start by adding the file db/data-model.cds and create an entity:

using { cuid } from '@sap/cds/common';
namespace db;

entity Projects : cuid {
  name : String(150);
  language : String(150);
  repository : String;
}

Next up, we need a service that exposes the entity. Therefore, just create the file srv/public-service.cds with the following content:

using db from '../db/data-model';

service PublicService {
    entity Projects as projection on db.Projects;
}


Since we now have defined our initial data model and service, we need to deploy the corresponding db artifacts to the PostgreSQL database. As mentioned above, cds deploy cannot be leveraged. Instead, we need to make use of the tasks cds-dbm provides. A full description can be found in the official documentation.
To deploy the changes, simply run the following command from the command line (npx is required as of now):

npx cds-dbm deploy

To verify that the database tables have been created properly, just go back to adminer and refresh the schema. You should see that all the tables and views are available. You will also find two additional tables, databasechangelog and databasechangeloglock. These are automatically generated by the internally used library Liquibase and should be left untouched.

Creating and consuming data

It's now time to start up the server and add some data. First, we want to use the service API to add data to the database. Therefore we start the server by typing:

cds serve

The service endpoints should now be available at: http://localhost:4004/public/
To create the data, we will leverage the VS Code REST Client plugin, because it is easy to set up and use. If you are not using VS Code, you can also use Postman, curl or other tools.

If you have the REST Client plugin installed, just add a file test/projects.http with the following content and send the request to your API.

### Create entity
POST http://localhost:4004/public/Projects
Content-Type: application/json

{
    "name": "cds-pg - PostgreSQL adapter for SAP CDS (CAP)",
    "repository": "",
    "language": "Node.js"
}

When you open the projects in the browser (http://localhost:4004/public/Projects), you should see the inserted project, delivered right from the PostgreSQL database.
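Reading data works the same way via OData query options. For example, a filtered GET request could be appended to the same .http file (the $filter syntax shown is standard OData):

```http
### Read projects filtered by language
GET http://localhost:4004/public/Projects?$filter=language eq 'Node.js'
```

The response should contain only the entries whose language property equals 'Node.js'.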

Loading data via .csv files

Another way to insert data into the database is by using .csv files (see official documentation). cds-dbm also has support for .csv files. While the .csv files are automatically loaded during deployment in native CAP scenarios (with SQLite and SAP HANA), the data loading must be explicitly triggered in cds-dbm.

But first, we need to create the file db/data/db-Projects.csv and add the following content (note the header row listing the entity's columns):

ID,name,repository,language
c682de2f-536b-44fe-acdd-6475d5660ca2,cds-dbm - Database deployment for PostgreSQL on CAP,,Node.js
5d1e6d61-3ad5-4813-9a67-1fd9df440f68,abapGit: Git client for ABAP,,ABAP
b6c36859-84c1-486b-9f32-a6a25513f3ba,abap-dev-utilities: ABAP Development Tools,,ABAP
2d7c145e-fd40-492f-8499-2dd21e3cf0fc,vscode_abap_remote_fs: Remote filesystem for ABAP systems,,ABAP

To actually load the data into the database, you have two options:

// only load .csv data
npx cds-dbm load --via delta  

// deploy data model and load .csv data
npx cds-dbm deploy --load-via delta  

By setting the load parameter to delta, cds-dbm does not remove existing data from the corresponding table, but only adds missing rows and updates altered ones. If the load parameter is set to full, the target table will be truncated and the data from the .csv file loaded into the empty table (the only mode supported in native CAP).
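Conceptually, the delta mode behaves like an upsert keyed on the entity's ID, while full replaces the table content entirely. The following is a simplified sketch of these semantics (an illustration only, not cds-dbm's actual implementation):

```javascript
// Sketch of "full" vs "delta" load semantics (illustration only, not cds-dbm's real code).
// Rows are plain objects keyed by their ID column.
function loadCsv(existingRows, csvRows, mode) {
  if (mode === 'full') {
    // full: truncate the table, then insert only the CSV content
    return [...csvRows];
  }
  // delta: keep existing rows, add missing ones, update altered ones
  const byId = new Map(existingRows.map((row) => [row.ID, row]));
  for (const row of csvRows) {
    byId.set(row.ID, { ...byId.get(row.ID), ...row });
  }
  return [...byId.values()];
}
```

With delta, a project created earlier via the API survives the load; with full, only the CSV rows would remain.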

So if you executed one of the above commands, you should now see the data in your exposed API (http://localhost:4004/public/Projects).

Applying changes to the data model

As a last step, we now want to enhance the data model and add support for votes. Thus, we update the following files, starting with db/data-model.cds:
using { cuid } from '@sap/cds/common';
namespace db;

entity Projects : cuid {
  name : String(150);
  language : String(150);
  repository : String;
  votes : Association to many Votes on votes.project = $self;
}

entity Votes : cuid {
  username : String(150);
  createdAt : DateTime;
  project : Association to one Projects;
}

…and then srv/public-service.cds:

using db from '../db/data-model';

service PublicService {
    entity Projects as projection on db.Projects;
    entity Votes as projection on db.Votes;
}

cds-dbm supports automated delta deployments (like native CAP on SAP HANA) and only applies the changes from your current CDS model to the database (unlike native CAP on SQLite, which drops the whole data model on every deployment). Before we do the deployment, let's examine the changes on the database level by using the diff command:

npx cds-dbm diff

This shows a detailed list of all changes between the current state of the database and your CDS model. Basically, the Votes table and the corresponding view are missing, as is to be expected.

To deploy the changes, we again use the cds-dbm deploy task, via:

npx cds-dbm deploy

All the additional entities should now be available in your service. Restart the server (cds serve) and refresh your browser. When you look closely at the Projects entity (http://localhost:4004/public/Projects), you will see that all the data in the db_projects table is still there, even the data that was inserted via the API. This is because cds-dbm not only supports delta deployments on a schema level, but is also able to keep your existing data in tables that are filled with default (.csv) data.
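To double-check at the database level, you can also run a quick query in the adminer SQL console. The table name db_projects assumes cds-dbm's default mapping of the db namespace to lowercase, underscore-separated names:

```sql
-- Both the API-inserted row and the CSV rows should still be present
SELECT name, language FROM db_projects ORDER BY name;
```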

A short summary

If you have followed this post to the very end, you should now have a basic understanding that CAP on PostgreSQL is no longer just an idea, but a real thing. By leveraging cds-pg and cds-dbm, you are able to use the power of CAP in combination with PostgreSQL as a database. And while there are some differences in the development workflow, things pretty much stick to the native feeling of working with CAP.

What’s next?

Update: 31.11.2020
With the latest developments it is possible to deploy and run your application on SAP BTP Cloud Foundry. Check out this blog post for more details.

But even though there is already lots of functionality available, CAP on PostgreSQL is not done yet and needs more support (draft functionality, multitenancy support…).

From here on, there are multiple paths:

Start using it

Even if the libraries are not mature yet, feel free to enhance the devtoberfest application or start building your own projects. If you have feature requests or encounter bugs/errors, please add them as issues to the respective GitHub projects of cds-pg or cds-dbm and help us make the libraries more mature.

Jump in and start contributing

Engaging in Open Source projects can be seen as spending unpaid time on stuff that others use. But it can also be seen as taking part in and shaping projects that have a real impact on things that matter to you (and your company). It is also a way to get in contact with great people, learn from others, be supportive and make a name for yourself in the (SAP) community.
So if you like the concepts behind CAP and also the idea of running CAP on PostgreSQL (or any other DB), then jump in and start contributing:

A final word to SAP

CAP is awesome. And it would be even more awesome if it were more open. I think it's understandable that CAP cannot be completely open sourced, because for SAP it needs to focus on SAP-related technologies. But it can definitely be more open, by having well-defined and better-documented APIs.

Come on, SAP, take a look at what we, the SAP community, are able to come up with… and we can do even better, if you let us by opening things up…

  • Great blog, thanks Mike!

    We have been thinking for a while now to get away from HANA as it is by far our largest expense for running our SaaS applications, but just haven't had the time to figure out the integration/deployment/etc. to any other type of DB. The blog and projects you have put together are now scheduled for my next weekend hack!

    And yes - 100% agree with your sentiment re: CAP being awesome and SAP needing to open it up more. I have spent a LOT of time troubleshooting and learning things that could have been easy fixes with better documentation.

    • This is not easy to answer without more details. In general, PostgreSQL is a very mature Open Source database used by many small and large companies around the world. From a technical standpoint, PostgreSQL is able to handle billions of rows of data (e.g. the size limit of a table is 32 terabytes), but, as with every other database, you need to think about vertical or horizontal scaling.
      And I would also add financial scaling if you want to rely on a managed PostgreSQL service (e.g. the PostgreSQL Hyperscaler Option), since storage and memory usage come at a price.

  • Thanks Mike for this illustrative blog and thanks for contributing this amazing module that brings us closer to PostgreSQL as a new option for CAP projects.

    Big thanks also to Gregor, Volker and rest of contributors for spending time on this.

    Best regards,


  • Hi Mike,

    Do you have any advice on how to handle user authentication and authorization with CAP and Postgres?

    Thanks and regards.

    • As long as you run the app on SAP Cloud Platform - Cloud Foundry Environment, it's no different from when you use HANA. If you're looking at other platforms, you might find some value in checking out my repository: cap-azure-ad-b2c.

      • Hi Gregor,


        Thanks a lot for your reply. I want to use SAP CAP in a non-SAP system. If you could guide me on where to start or what approach I should follow, it would be very helpful.

        As I understood from Christian Georgi in this thread, the SAP license for CAP allows the use of this technology outside the SAP ecosystem, without any support from SAP.


        Thank you very much for your help.

  • Great job!

    I think that openui5 & cds-pg is the perfect marriage in web dev space.


    Thank you for your effort and time!



  • Good evening!
    I have tried several times to recreate this from scratch (moreover, I followed your link to Stack Overflow and stopped the brew postgresql process), but I keep getting stuck on some errors - I am attaching screenshots.

    PS One of them complains about a wrong password - which is strange, since I use everything exactly as in the tutorial. The only thing I changed is the name of the db (instead of Projects I used "Company")

    And since you replied - could you tell me how, for example, to bind a database from another cloud? Does it also need to be deployed in Docker?

    Thanks in advance!

    P.P.S. I tried it from another MacBook - with the commands "npx cds-dbm deploy" and "npx cds-dbm deploy --create-db" it catches the error "Error: connect ECONNREFUSED
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js: 1113: 14)", I am also attaching a screenshot.
    However, the next guide on deployment to CF worked successfully (though now the question is how to connect a third-party PostgreSQL database from another cloud, rather than inserting data via .csv)

    • Hi,

      I've just updated my sample project pg-beershop to the latest cds version 5. Also, I've upgraded my Node.js runtime to 14, as recommended in the CAP release notes for the Spring 2021 release. It's working just fine locally. Please get the project and follow the description in the readme. The readme also shows different options to deploy to different cloud providers.

      Best regards

  • Thank you Gregor Wolf and Mike Zaschka!

    I built a project following this tutorial and published it to Cloud Foundry, but my question is:
    If in this tutorial the data can be added via adminer and is then displayed on localhost - how can the same data be transferred when deploying to Cloud Foundry services? Is it possible? Or only by adding .csv models?

    Thanks in advance!

    • You should use the initial data provisioning via CSV files only for development. The CSV files will reset the content of your database with each deployment. Use the generated OData APIs to fill the data. You can create a client for API access. Check out bookshop-demo#allow-api-access.

      • Okay.
        I looked at your link (on bookshop) and created a tests folder with two files:
        1) xs-security.json;
        2) test.env (where I filled in the data that came back after executing the commands in the VS Code terminal; the data looks like this:

        { "apiurl": "", ... )

        How can you bind the generated OData API, and what exactly is meant by "OData API" in this context?
        Thanks in advance!

  • Hi,

    Great work on this Gregor Wolf and Mike Zaschka! I have one challenge with table names getting too long when using several levels of compositions. PostgreSQL has a limit of 63 characters for identifiers (half of HANA's limit, to my knowledge).

    Table names are generated by cds-dbm from the full entity path, which runs out of characters if there are many levels / long names.

    I can of course manage this by shortening the field names in the entity model, but since that also makes the API services less clear I was wondering if there is any way around this (e.g. with annotations forcing the name at the database level).

    Thanks in advance!


  • Hi Mike Zaschka

    great post! In reference to that, I would like to understand the differences (if they exist) between the PostgreSQL connection with Node.js and with Java. Are there pros and cons for both programming languages? Is there an overview available? Does the CAP connection to PostgreSQL work better with Node.js or Java?

    I would be very grateful to receive some insights.

    Best Regards


  • Hi Gregor Wolf

    thanks for the quick response!

    does that mean PostgreSQL only works with Node.js and not Java? Or is the integration at least easier with Node.js, as it just requires the two modules cds-pg and cds-dbm?


    Best Regards


    • cds-pg is pure Node.js, as it builds on the Node.js adapter for PostgreSQL. cds-dbm might help you even in Java for the database migrations. In Java it might even be simpler, as you could try to switch to another database with a JDBC driver. For the SAP-supported databases, check the CAP documentation on Database Support.

    • Hi Max,

      in the official documentation on the Java SDK you will find some comments on PostgreSQL:

      There seems to be some kind of official support, but there are also some limitations. Since I haven't worked with the Java SDK, I cannot comment on this further. And I also do not know if there is a blog post or any further information available on how to use CAP Java with PostgreSQL.

      As Gregor mentioned, this post and cds-pg/cds-dbm are all about the Node.js SDK. There is no official support for PostgreSQL on the SAP side, but we, as a community, implemented the adapter as an Open Source initiative.
      The adapter currently also has its limitations, but you can easily jump-start a project and, since it's all Open Source, also contribute if you find missing features or bugs.