
Cloud Integration – A Simple Demo Scenario Using the JDBC Adapter

The main job of SAP Cloud Platform Integration (CPI) is to pass through and process data from connected systems, thereby integrating business processes that span multiple applications or systems.

The built-in options to store data mainly cover temporary storage (like the JMS adapter or the data store, where the maximum retention time is 180 days). In many scenarios, however, you also need to store data permanently. With the recently developed JDBC (Java Database Connectivity) adapter, you can connect an integration flow to a database and execute SQL operations on it.

In this blog, I will give you a step-by-step description of how to set up a simple integration scenario using a JDBC adapter.

If you are interested in a concise summary of all steps required to set up the connection to an ASE database system with CPI using the JDBC adapter, check out the following SAP Community blog: Configuring JDBC Adapter: A Step-by-Step Guide

The Scenario

In SAP Help Portal you can find the description of a simple “getting started” integration flow that reads product information from an external data source (a Webshop) and processes that data further (see HTTPS Adapter: Example Integration Flow). In this scenario, an HTTP client sends a request to CPI that contains a productIdentifier value. In a subsequent OData call, product data for this productIdentifier value is retrieved and sent back to the HTTP client in the response (additionally, the integration flow sends the product information to an email account to showcase the Mail adapter). In this blog, I will show you how to enhance this scenario to include a JDBC database.

When you have finished modelling, the integration flow will do the following: each time it has retrieved data for a specific product from the catalog, it stores a corresponding record in a database table. The following rule is applied: if a corresponding record already exists in the table, nothing is written; if no such record exists, the JDBC adapter writes a new entry.

In a subsequent step, the integration flow reads all records from this table and sends the result back to the sender (and, in addition, forwards it to an email receiver). With each new HTTP request (with a productIdentifier value not used before), a new record is added to the database table.

That way, with each request you get back the requested product data together with the product data from all previous requests (providing a “history” of past product requests).

The following figure shows the high-level setup of involved components.

Set Up the Connection to the ASE Database System

I assume that, as a prerequisite, you have been provided with an ASE database system on an SAP Cloud Platform subaccount (SAP Cloud Platform SAP ASE Service). You can find the database system and schema in SAP Cloud Platform Cockpit. Go to the related subaccount (that hosts the database system) and under SAP HANA / SAP ASE choose Database Systems.

Create a Database Schema

With the following steps, you first create a database schema.

  1. In SAP Cloud Platform Cockpit go to the subaccount that hosts the database system and choose SAP HANA / SAP ASE > Databases & Schemas.
  2. Enter a Database ID (for example: mydbschema). You need this ID later when generating an access token to grant access to the database schema.
  3. As Database User, enter a user name (for example, mydbuser). You need this name in a later step when configuring the JDBC data source on the CPI tenant. Enter a password for the database user (note it down for later reference).
  4. Click Create.

Grant the CPI Runtime Node Access to the Database

Since you want to access this database from a CPI runtime node (where the integration flow is deployed), you need to grant the subaccount associated with the CPI runtime node permission to access the database. To do that, you first need to generate an access token.

To generate the access token, you need an SAP SDK command line tool that you can download from the SAP Cloud Platform Tools page: https://tools.hana.ondemand.com/ (download the following SDK: Java EE 6 Web Profile).

When you have downloaded the SDK, make sure that the environment variables of your computer are set so that the PATH variable points to the folder that contains the SDK. The following figure shows an example configuration for Windows.
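
If you prefer the command line over the environment variable dialog, you can also extend the PATH variable directly. The following is a sketch for Windows; the folder name is only an example and depends on where you extracted the SDK:

rem Example only: append the SDK tools folder to the user PATH (adjust to your download location)
setx PATH "%PATH%;C:\tools\neo-java-web-sdk\tools"

rem Open a new command prompt afterwards and verify that the neo tool is found
neo help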

Before you can generate the access token with the SDK command line tool, you need to find out certain parameters:

  • The name of the subaccount that hosts the database: In SAP Cloud Platform Cockpit, go to the subaccount and choose Overview. You can find the name under Technical Name.
  • The ID of the database schema. To find this information, in SAP Cloud Platform Cockpit, go to the subaccount that hosts the database and choose SAP HANA / SAP ASE > Databases & Schemas.
  • The technical name of the provider subaccount of the CPI runtime node: In SAP Cloud Platform Cockpit, open the subaccount that hosts the CPI runtime node and choose: Applications > Subscriptions.
    Look for the row that contains an entry ending with iflmap.
    The entry in that row in column Provider Subaccount is the provider subaccount of the CPI runtime node, whereas the entry in that row in column Application is the CPI runtime node application name. You need both entries later when generating the access token.

Open the command line tool and enter the following command (with the values for the parameters in angle brackets as explained above):

neo grant-schema-access --account <subaccount name> --host hana.ondemand.com --user <user required to access the subaccount that hosts database> --id <Id of the database schema> --application <technical name of provider subaccount of the CPI runtime node>:<CPI runtime node application>

Example:

neo grant-schema-access --account abcd12345 --host hana.ondemand.com --user user1 --id mydbschema --application defghi:a1234iflmap

Enter the password of user1. Note that this is the user to access the subaccount, not the one defined for the database schema (in our example, mydbuser). The latter will be required in a later step.

The access token is provided by the command line tool:

Access token to bind application ‘defghi:a1234iflmap’ to schema ‘mydbschema’: <access token>

Copy the access token to the clipboard or a text editor (you need it later when creating the JDBC data source on the CPI tenant).

Create the JDBC Data Source (CPI Tenant)

To connect an integration flow to a data source through the JDBC adapter, you first need to create a JDBC data source artifact. This artifact stores the access data for the database (as defined and generated in the previous steps). When you later design the integration flow, you only need to point the JDBC adapter to the alias of the JDBC data source; no additional configuration is required in the integration flow.

  1. Open the Web UI for the CPI tenant (https://<tenant address>/itspaces).
  2. Go to the Operations view and choose the tile JDBC Data Sources.
  3. Choose Add.
  4. Specify the following settings:
    Name: Enter any name of your choice for the artifact. You need to specify this name later in the JDBC adapter. Note that the name must be unique across the tenant.
    Description: Enter any description of your choice.
    Database Type: Select Cloud ASE.
    User: Enter the user defined for the database schema (mydbuser in our example).
    Password: Enter the password specified when defining the database schema.
    Database ID: Enter the name of the database schema (mydbschema in our example).
    Access Token: Enter the access token provided by the SDK command above.

  5. Click Deploy.

You have now done all the work needed to enable your CPI tenant to connect to the database. As a next step, you create integration flows that access the database. All you have to do in order to connect to the database is refer to this JDBC data source artifact; the required credentials and access token are stored in the artifact, so you don’t need to re-enter them.

Initially Create the Database Table

Let’s first check whether the connection to the database works. To do that, create a simple integration flow that does nothing other than create a database table in the ASE database. To save effort, this integration flow creates the very table that is required for the demo scenario later on.

For the demo scenario, you need a database table that contains the following columns (to store product ID and name for each product retrieved from the product catalog):

  • productId
  • productName

Let’s get started.

  1. In the Design view of the CPI Web UI, open any integration package and create a new integration flow.
  2. Model the following integration flow:
  3. For the Timer event, under Scheduler, select Run Once.
  4. Click the JDBC adapter and, as JDBC Data Source Alias, enter the name of the artifact you created before. Keep the other default settings.
  5. Now configure the Content Modifier.

At this step, let me point out the following: As soon as you have connected a CPI tenant with a JDBC data source, you can inject any SQL statement you like through the integration flow. In this example, you use a Content Modifier to create a message body that contains a simple SQL statement to create a database table. Note that CPI also supports the SQL XML format that was already introduced with the SAP Process Integration JDBC adapter (more information in the documentation sources mentioned at the end of this blog).
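
For illustration, an INSERT into our products table expressed in that XML SQL format would look roughly like this (a sketch following the SAP Process Integration conventions; the statement name is arbitrary and the values are placeholders):

<root>
  <StatementName>
    <dbTableName action="INSERT">
      <table>products</table>
      <access>
        <productId>HT-1080</productId>
        <productName>Example Product</productName>
      </access>
    </dbTableName>
  </StatementName>
</root>

In this blog, however, we stick to plain SQL statements in the message body.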
Click the Content Modifier shape, go to the Message Body tab and enter the following:

CREATE TABLE products (productId varchar(255), productName varchar(255))

With this command, a table with the name products and two columns (productId and productName) is created. I propose to use the varchar data type, as the values of the fields from the product catalog can have variable length. The following figure shows the Content Modifier settings.

After you have saved and deployed the integration flow, the Timer start event will trigger exactly one message processing run, during which the table will be created. You can check in the Operations view under Monitor Message Processing whether everything went well.

Create Integration Flow to Access the Database Table

Our demo integration flow will process the data (retrieved from a product catalog) as illustrated in the figure at the beginning of this blog post.

To implement this scenario, you now modify the example integration flow (described under HTTPS Adapter: Example Integration Flow) by adding steps to include access to the database table.

  1. Model the following integration flow (as described in the above-mentioned documentation section):
  2. To keep it simple, configure the OData receiver adapter (through which the product data is retrieved from the Webshop) in such a way that you enter this simple query (as you are only interested in the product attributes ProductId and Name):
    $select=ProductId,Name&$filter=ProductId eq '${header.productIdentifier}'

    The figure shows the OData adapter configuration (Processing tab).

  3. Add another Content Modifier shape after the Request Reply step (that sends the request to the Webshop). In the Message Header tab of the Content Modifier, add a header with the following parameters:
    Name: productName
    Type: XPath
    Data Type: java.lang.String
    Value: //Name

    The defined header will contain the value of the Name field of the response retrieved from the Webshop (for the given productIdentifier).

  4. To the right of this Content Modifier, add another Content Modifier, and in the Message Body tab enter:
    IF NOT EXISTS (SELECT 1 FROM products WHERE productId = '${header.productIdentifier}')
    BEGIN
    INSERT INTO products (productId, productName) VALUES ('${header.productIdentifier}', '${header.productName}')
    END

    The figure shows the Content Modifier settings.

    This SQL command first checks whether a record already exists in the table for the productIdentifier value contained in the message header and, if not, inserts the product record associated with this value (see the substituted example after this list).

  5. To finally execute this SQL statement on the database table, add a Request Reply step next to the Content Modifier and connect it to an additional receiver through the JDBC adapter. In the JDBC adapter, enter the name of the JDBC data source artifact as JDBC Data Source Alias.
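
For illustration, suppose the Webshop returned the (made-up) name Notebook Basic for product HT-1080. The XPath //Name extracts that value into the productName header, and after header substitution the message body handed to the JDBC adapter would read:

IF NOT EXISTS (SELECT 1 FROM products WHERE productId = 'HT-1080')
BEGIN
INSERT INTO products (productId, productName) VALUES ('HT-1080', 'Notebook Basic')
END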

The integration flow should now have the following appearance:

Finally, model an additional database access step where the whole content of the table is read out.

  1. First, add another Content Modifier where you specify the SELECT statement:
    SELECT productId,productName FROM products

    The figure shows the Content Modifier settings.

  2. To finish this step, add another Request Reply step and connect it to an additional receiver – again, you guessed it, through a JDBC adapter configured in the same way.

The integration flow should now have the following appearance:

Let’s run the integration flow to see if it works. As explained under Send the HTTP Request and Process the Integration Flow, provide a JSON message with the productIdentifier value HT-1080.
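
The exact request body is not reproduced in this extract; a minimal sketch, assuming the JSON format used by the referenced example flow, would be:

{
    "productIdentifier": "HT-1080"
}

Your HTTP client (I used Postman, which you can download from https://www.getpostman.com/) will get back a response like the following (and the email receiver will get a corresponding message as well):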

You notice that the JDBC adapter provides the content from the database in a specific format as described under Defining XML Documents for Message Protocol XML SQL Format.
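
To give a rough idea of that format, the response to the SELECT statement contains one row element per table record, along these lines (a hedged sketch, not verbatim adapter output; see the linked documentation for the exact element names):

<root>
  <row>
    <productId>HT-1080</productId>
    <productName>Example Product</productName>
  </row>
</root>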

Now, post another HTTP request, and as value for productIdentifier enter HT-1081. You will get this response.

If you make another call specifying HT-1081 as the productIdentifier, you will get the same response (as the integration flow checks whether the record already exists and only updates the table if it does not).

More Information

I would like to thank Suyash Mani Sharma for providing valuable help and guidance through this topic.

8 Comments
  • Hello Peter,

    Great blog, I was able to use the instructions to access an ASE database. We are facing one challenge: we have multiple databases (DEV, QA). If we deploy the relevant JDBC data sources (one for DEV, one for QA), we see strange behaviour: the CPI iFlow always gets data from the first JDBC source that was deployed.

    Whether I use the iFlow with source = “DEV” or source = “QA”, the data is always retrieved from ASE database “DEV”. Bindings were created correctly in the cockpit.

    Are we missing something? It would be great if you could add this scenario as well in your blog.

    Regards, Parag

    • Hi Parag,

      Are the iFlows different for the two data sources (DEV and QA)?

      What do you mean by “bindings were created correctly in the cockpit”?

      Regards,

      Suyash

      • Same iFlow. For example: ECC-DEV > CPI > DB-DEV and ECC-QA > CPI > DB-QA.

        In the Cloud Platform cockpit we can view the bindings; these are created correctly.

    • Hello Nitish,

      thanks for the feedback and sorry for the delayed response!

      SAP CPI doesn’t run in trial accounts. Furthermore, the JDBC adapter only supports a database system provisioned in the same region as CPI. Therefore, the HANA trial database cannot be used with this adapter, unfortunately.

      Best regards,

      Peter

  • Hi Peter,

    Thank you very much, this is really informative. But as of now, when I am in the ASE DB I am getting a warning that no DB is available. Could you please suggest any other third-party freeware DB that we can use for test purposes? I even tried with the HANA trial DB, but it is not working; in the command line I am getting an error like the one below:

    ERROR; Could not connect to host: https://services.hana.ondemand.com/services/v1/instances/abcd/persistence/v1/accesstoken/. Make sure your proxy settings are correct
    If you need help, provide the output of the command and attach the log directory [C:\Users\satish.goli\Desktop\CPI\neo-javaee7-wp-sdk-1.51.7\tools\log]