Cloud Integration – A Simple Demo Scenario Using the JDBC Adapter
The main job of SAP Cloud Integration (CPI) is to pass through and process data from connected systems and thereby integrate business processes that span multiple applications or systems.
The built-in storage options mainly cover temporary data (for example, the JMS adapter, or the data store with its maximum retention time of 180 days). In many scenarios, however, you may also need to store data permanently. With the recently developed JDBC (Java Database Connectivity) adapter, you can connect an integration flow to a database and execute SQL operations on it.
In this blog, I will give you a step-by-step description of how to set up a simple integration scenario using a JDBC adapter.
If you are interested in a concise summary of all steps required to set up the connection to an ASE database system with CPI using the JDBC adapter, check out the following SAP Community blog: Configuring JDBC Adapter: A Step-by-Step Guide
For those who are curious what else you can do with the JDBC adapter, as a follow-up to this blog, I recommend checking out the following one: Cloud Integration – Using the JDBC Adapter to Implement a Robust Scenario that Handles an Exception.
The Scenario
In the SAP Help Portal, you can find the description of a simple “getting started” integration flow that reads product information from an external data source (a Webshop) and processes that data further (see HTTPS Adapter: Example Integration Flow). In that scenario, an HTTP client sends a request to CPI that contains a productIdentifier value. In a subsequent OData call, product data for this productIdentifier value is retrieved and sent back to the HTTP client in the response (additionally, the integration flow sends the product information to an email account to showcase the Mail adapter). In this blog, I will show you how to enhance this scenario to include a JDBC database.
When you have finished modeling, the integration flow will do the following: Each time it has retrieved data for a specific product from the catalog, it stores a corresponding record in a database table. The following rule applies: If a corresponding record already exists in the table, nothing is written. If no such record exists, a corresponding entry is written (by the JDBC adapter).
In a subsequent step, the integration flow reads all records from this table and sends the result back to the sender (and, in addition, forwards it to an email receiver). With each new HTTP request (with a productIdentifier value not used before), a new record is added to the database table.
That way, with each request you get back the requested product data together with the product data from all previous requests (providing a “history” of past product requests).
The following figure shows the high-level setup of involved components.
Set Up the Connection to the ASE Database System
I assume that, as a prerequisite, you have been provided with an ASE database system in an SAP Business Technology Platform subaccount (SAP ASE Service). You can find the database system and schema in the SAP Business Technology Platform cockpit: go to the related subaccount (that hosts the database system) and, under SAP HANA / SAP ASE, choose Database Systems.
Create a Database Schema
With the following steps, you first create a database schema.
- In SAP Business Technology Platform cockpit go to the subaccount that hosts the database system and choose SAP HANA / SAP ASE > Databases & Schemas.
- Choose New.
- Enter a Database ID (for example: mydbschema). You need this ID later when generating an access token to grant access to the database schema.
- As Database User, enter a user name (for example: mydbuser). You need this name later when configuring the JDBC data source on the CPI tenant. Enter a password for the database user (which you also need to note for later reference).
- Click Create.
Grant the CPI Runtime Node Access to the Database
As you want to access this database from a CPI runtime node (where the integration flow is deployed), you need to grant the subaccount associated with the CPI runtime node permission to access the database. To do that, you first need to generate an access token.
To generate the access token, you need an SAP SDK command line tool, which you can download from https://tools.hana.ondemand.com/ (download the following SDK: Java EE 7 Web Profile TomEE 7).
When you have downloaded and unpacked the SDK, make sure that the environment variables of your computer are set so that the PATH variable points to the folder that contains the SDK. The following figure shows an example configuration for Windows.
Before you can generate the access token with the SDK command line tool, you need to find out certain parameters:
- The name of the subaccount that hosts the database: Open SAP Business Technology Platform cockpit, go to the subaccount and choose Overview. You can find the name under Technical Name.
- The ID of the database schema. To find this information, in SAP Business Technology Platform cockpit, go to the subaccount that hosts the database and choose SAP HANA / SAP ASE > Databases & Schemas.
- The technical name of the provider subaccount of the CPI runtime node: In SAP Business Technology Platform cockpit, open the subaccount that hosts the CPI runtime node and choose: Applications > Subscriptions.
Look for the row that contains an entry ending with iflmap.
The entry in that row in column Provider Subaccount is the provider subaccount of the CPI runtime node, whereas the entry in that row in column Application is the CPI runtime node application name. You need both entries later when generating the access token.
Open a command prompt and navigate to the tools folder of the SDK you downloaded before. Enter the following command (with the values for the parameters in angle brackets as explained above):
neo grant-schema-access --account <subaccount name> --host hana.ondemand.com --user <user required to access the subaccount that hosts database> --id <Id of the database schema> --application <technical name of provider subaccount of the CPI runtime node>:<CPI runtime node application>
Example:
neo grant-schema-access --account abcd12345 --host hana.ondemand.com --user user1 --id mydbschema --application defghi:a1234iflmap
Enter the password of user1. Note that this is the user to access the subaccount, not the one defined for the database schema (in our example, mydbuser). The latter will be required in a later step.
Important: As mentioned above, the access token (to grant access to the database) needs to be generated for the runtime node (on which, finally, the integration flow that connects to the database using the JDBC adapter is processed). Therefore, make sure that in the command you specify the CPI runtime node application. For historical reasons, the corresponding ID ends with iflmap (in the example command, this is the entry defghi:a1234iflmap). Do not enter an application ID that ends with tmn, because in that case you would specify the tenant management node (tmn) application, which is the virtual machine that acts as an agent between the human user and the runtime components of SAP Cloud Integration. The tmn application is not responsible for processing integration flows at runtime. If you enter the tmn application, you will get a deployment error later when trying to deploy the JDBC Data Source artifact with the token generated this way (as described below under Create the JDBC Data Source (CPI Tenant)). For more information on the architecture, see Technical Landscape (Neo).
The access token is provided by the command line tool:
Access token to bind application 'defghi:a1234iflmap' to schema 'mydbschema': <access token>
Copy the access token to the clipboard or a text editor (you need it later when creating the JDBC data source on the CPI tenant).
Create the JDBC Data Source (CPI Tenant)
To connect an integration flow with the JDBC adapter to a data source, you first need to create a JDBC data source artifact. This artifact stores the access data for the database (as defined and generated in the previous steps). When you later design the integration flow, you then only need to point to the alias of the JDBC data source in the JDBC adapter. No additional configuration is required in the integration flow.
- Open the Web UI for the CPI tenant (https://<tenant address>/itspaces).
- Go to the Monitor view and choose the tile JDBC Material.
- Choose Add.
- Specify the following settings:
Name: Enter any name of your choice for the artifact. You need to specify this name later in the JDBC adapter. Note that the name needs to be unique across the tenant.
Description: Enter any description of your choice.
Database Type: Select SAP ASE service.
User: Enter the user defined for the database schema (mydbuser in our example).
Password: Enter the password specified when defining the database schema.
Database ID: Enter the name of the database schema (mydbschema in our example).
Access Token: Enter the access token provided by the SDK command above.
- Click Deploy.
Tip:
If you have forgotten the access token, you can display it (without generating a new one) by using the following command in the command line tool:
neo list-schema-access-grants --host <host> --account <subaccount_technical_name> --user <e-mail_or_user>
Example (with the same data as used above when generating the token):
neo list-schema-access-grants --host hana.ondemand.com --account abcd12345 --user user1
This might also be necessary if, at a later point in time, you want to create a new JDBC Data Source artifact using the same database access data.
You have now done all the work needed to enable your CPI tenant to connect to the database. As a next step, you create integration flows that access the database. All you have to do to connect to the database is refer to this JDBC data source artifact. The required credentials and access token are stored in the artifact, and you don’t need to enter them again.
Initially Create the Database Table
Let’s first check if the connection to the database works. To do that, create a simple integration flow that does nothing other than create a database table in the ASE database. To use synergies, this integration flow creates the table that is required for the demo scenario later on.
For the demo scenario, you need a database table that contains the following columns (to store product ID and name for each product retrieved from the product catalog):
- productId
- productName
Let’s get started.
- In the Design view of the CPI Web UI, open any integration package and create a new integration flow.
- Model the following integration flow:
- For the Timer event, under Scheduler, select Run Once.
- Click the JDBC adapter and, as JDBC Data Source Alias, enter the name of the artifact you just created. Keep the other default settings.
- Now configure the Content Modifier.
At this step, let me point out the following: As soon as you have connected a CPI tenant to a JDBC data source, you can inject any SQL statement you like through the integration flow. In this example, you use a Content Modifier to create a message body that contains a simple SQL statement to create a database table. Note that CPI also supports the SQL XML format that was already introduced with the SAP Process Integration JDBC adapter (more information in the documentation sources mentioned at the end of this blog).
Click the Content Modifier shape, go to the Message Body tab and enter the following:
CREATE TABLE products (productId varchar(255), productName varchar(255))
With this command, a table with the name products and two columns (productId and productName) is created. I propose using the varchar data type, as the values of the fields from the product catalog can have variable length. The following figure shows the Content Modifier settings.
After you have saved and deployed the integration flow, the Timer start event will trigger exactly one message processing run, during which the table will be created. You can check in the Operations view under Monitor Message Processing whether everything went well.
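A side note in case you want to re-run this flow later: the CREATE TABLE statement fails if the table already exists. To start over with an empty table, you could deploy the same kind of one-step flow with a cleanup statement in the Content Modifier instead (a hypothetical variation, reusing the products table from this scenario):
DROP TABLE products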
Create Integration Flow to Access the Database Table
Our demo integration flow will process data (retrieved from a product catalog) as illustrated in the figure at the beginning of this blog post.
To implement this scenario, you now modify the example integration flow (described under HTTPS Adapter: Example Integration Flow) by adding steps to include access to the database table.
- Model the following integration flow (as described in the above-mentioned documentation section):
- To keep it simple, configure the OData receiver adapter (through which the product data is retrieved from the Webshop) so that it uses this simple select statement (as you are only interested in the product attributes ProductId and Name):
$select=ProductId,Name&$filter=ProductId eq '${header.productIdentifier}'
The figure shows the OData adapter configuration (Processing tab):
- Add another Content Modifier shape after the Request Reply step (that sends the request to the Webshop). In the Message Header tab of the Content Modifier, add a header with the following parameters:
Name: productName
Type: XPath
Data Type: java.lang.String
Value: //Name
The defined header will contain the value of the Name field of the response retrieved from the Webshop (for the given productIdentifier).
- To the right of this Content Modifier, add another Content Modifier and, in the Message Body tab, enter:
IF NOT EXISTS (SELECT 1 FROM products WHERE productId = '${header.productIdentifier}')
BEGIN
    INSERT INTO products (productId, productName)
    VALUES ('${header.productIdentifier}', '${header.productName}')
END
The figure shows the Content Modifier settings:
This SQL command first checks whether a record already exists in the table for the productIdentifier value contained in the message header and, if this is not the case, inserts the product record associated with this productIdentifier value.
- To finally run this SQL statement on the database table, add a Request Reply step next to the Content Modifier and connect it to an additional receiver through the JDBC adapter. In the JDBC adapter, as JDBC Data Source Alias, enter the name of the artifact created before.
The integration flow should now have the following appearance:
Finally, model an additional database access step where the whole content of the table is read out.
- First, add another Content Modifier where you specify the SELECT statement.
SELECT productId,productName FROM products
The figure shows the Content Modifier settings:
- To finish this step, add another Request Reply step and connect it to an additional receiver – again, you guessed it, with a JDBC adapter that is configured in the same way.
The integration flow should now have the following appearance:
Let’s run the integration flow to see if it works. As explained under Send the HTTP Request and Process the Integration Flow, provide a JSON message with a productIdentifier value of HT-1080 (for example: {"productIdentifier": "HT-1080"}), and your HTTP client (I used Postman, which you can download from https://www.getpostman.com/) will get back a response like the following one (and the email receiver will get a corresponding message as well):
You notice that the JDBC adapter provides the content from the database in a specific format as described under Defining XML Documents for Message Protocol XML SQL Format.
Now, post another HTTP request, this time with HT-1081 as the value for productIdentifier. You will get this response.
If you make another call specifying HT-1081 as productIdentifier, you will get the same response (as the integration flow checks whether the record exists and only updates the table if it does not).
More Information
- HTTPS Adapter: Example Integration Flow (Help Portal documentation about the sample integration flow that is modified as described here to allow storing data in an ASE database table)
- Configuring JDBC Adapter: A Step-by-Step Guide (SAP Community blog providing a concise summary of all steps required to set up the connection to an ASE database system with CPI using the JDBC adapter)
- JDBC Receiver Adapter (Help Portal documentation of the JDBC adapter parameters)
- Defining XML Documents for Message Protocol XML SQL Format (Help Portal documentation of the SQL XML format supported by the JDBC adapter – both CPI and on-premise SAP Process Orchestration)
- Give Applications in Other Subaccounts Permission to Access a Database (SAP Help Portal topic explaining how to generate the access token for the database)
I would like to thank Suyash Mani Sharma for providing valuable help and guidance through this topic.
Hello Peter,
Great blog, I was able to use the instructions to access an ASE database. We are facing one challenge, though. We have multiple databases (DEV, QA). If we deploy the relevant JDBC data sources (one for DEV, one for QA), we see a strange behaviour: the CPI iFlow always gets data from the first JDBC data source that was deployed.
Whether I use the iFlow with source = "DEV" or source = "QA", the data is always retrieved from the ASE database "DEV". Bindings were created correctly in the cockpit.
Are we missing something? It would be great if you could add this scenario to your blog as well.
Regards, Parag
Hi Parag,
Are the iFlows different for the two data sources (DEV and QA)?
What do you mean by "bindings were created correctly in the cockpit"?
Regards,
Suyash
Same iFlow. For example: ECC-DEV > CPI > DB-DEV and ECC-QA > CPI > DB-QA.
In the cloud platform cockpit, we can view the bindings; these are created correctly.
Hello Peter,
A great blog.
Can you please suggest whether we can implement a similar scenario using a HANA trial DB?
Thanks.
Best Regards,
Nitish Kumar
Hello Nitish,
thanks for the feedback and sorry for the delayed response!
SAP CPI doesn’t run in trial accounts. Furthermore, the JDBC adapter only supports a database system provisioned in the same region as CPI. Therefore, the HANA trial database cannot be used with this adapter, unfortunately.
Best regards,
Peter
Thanks Peter.
Hi Peter,
does that mean we also cannot combine CPI on a Neo tenant with HANA on a Cloud Foundry tenant?
Also, are there any plans to extend the JDBC adapter to non-SAP systems and on-premise systems?
Best regards
Hi Dominic,
The answer to your second question: SAP plans to release this feature in the Q3 2020 release.
You can find the SAP roadmap entry for this feature here:
https://roadmaps.sap.com/board?PRODUCT=42F2E964F4E71EDAA4EB06616C3F00E5&range=CURRENT-LAST#;INNO=6EAE8B28C5D91ED89DAF36D9A684C0CE
Is batch processing possible?
If yes, can you share a sample structure?
Hi Peter,
Thank you very much. It's really very informative for me. But as of now, when I try the ASE DB, I'm getting a warning like "no DB available". Could you please suggest any other third-party freeware DB that we can use for test purposes? I even tried with the HANA trial DB, but it's not working. In the command line, I'm getting an error like the one below:
ERROR; Could not connect to host: https://services.hana.ondemand.com/services/v1/instances/abcd/persistence/v1/accesstoken/. Make sure your proxy settings are correct
If you need help, provide the output of the command and attach the log directory [C:\Users\satish.goli\Desktop\CPI\neo-javaee7-wp-sdk-1.51.7\tools\log]
Try connecting using an open network, and make sure the logged-in user has admin rights to the SCP account.
Thanks.
Best Regards,
Nitish Kumar Shukla
Hi All,
Which user should I use while generating the token?
I tried with my S-user, but it's not accepting it.
(!) ERROR: Wrong credentials. Enter correct user name and password. Note that your user will be locked after 5 consecutive failed authentication attempts.
Thanks,
Pooja
Hello Pooja,
yes, it should be possible to generate the token with an S-user.
Have you checked that you provided the correct credentials?
It is also required that the S-user has the administrator role for the subaccount. Maybe you'd like to check that.
Best regards,
Peter
Hi Peter,
I hope that you are well.
I tried generating an access token using my S-user, but I got the error below.
(!) ERROR: Wrong credentials. Enter correct user name and password. Note that your user will be locked after 5 consecutive failed authentication attempts.
My S-user has the Administrator role assigned to it.
What could be the issue? Please advise.
If it happens that my user gets locked, how can I unlock it?
kind regards,
Isaac
Dear Isaac,
without being sure that this is exactly the right answer: a possible cause of the error is that you specified the wrong subaccount in the command. Please check whether you have specified the "CPI runtime" subaccount as described in the blog, in the information box starting with ...
"Important: As mentioned above, the access token (to grant access to the database) needs to be generated for the runtime node (on which, finally, the integration flow that connects to the database using the JDBC adapter is processed)...."
(See text above for the complete instructions.)
I hope that this helps you to fix the error.
Best regards,
Peter
Hi Peter,
Thanks for the response.
I checked all the values in the command and they are correct, including the schema, but I found out that in the SAP cockpit, in SAP HANA / SAP ASE under Database Systems, there's nothing, and when I try to create a new database system it shows the message on the attached screenshot.
New Database System creation Error
Could the missing Database System be the cause of the problem ?
kind regards
Hello Isaac,
of course, this is an obvious reason why you cannot set up the connection. I fear I cannot really help here, but you might check out the documentation on the initial setup, which describes how to create a database system - for example:
https://help.sap.com/viewer/3fa880aa54b74110ae99ad01503fcd60/Cloud/en-US/b2a611485ed14fa1b832b1fd7b2fb82c.html
Best regards
Peter
Hi Peter,
Thanks for the response and i highly appreciate your help.
kind regards,
Isaac
Hello Peter,
This is a very informative and wonderful blog; it has helped me in creating the integration flow, and I am able to connect to the HANA DB (from CF) and create a table. Thanks a lot!
However, while executing the last bit to insert values into the DB, I am getting the error below:
Hello Indu,
thank you for your feedback!
Without having seen your setup, the following can be the cause of the error (without guarantee): Maybe the SQL statement cannot be resolved because a dynamic parameter (e.g. ${header.productIdentifier}) is not resolved at runtime. In such a case, the SQL statement would not be complete. You can check whether that is the case by running the integration flow with log level TRACE (see, for example, this topic) and checking the message content at the step where the SQL statement is defined in the Content Modifier. (Set the log level in the Monitoring app under Manage Integration Content, then run the integration flow, and under Monitor Message Processing click Trace to see the integration flow model. Now you can check, for each step, which message payload and headers are available at runtime.)
Did you check if the required headers are specified in the Runtime Configuration tab?
Let me know if this does not help.
Best regards
Peter
Hello Peter,
Thank you for responding.
I checked the trace and I can see the value of productIdentifier.
To test for a syntax issue, I first tested with a single SQL command in Content Modifier 3:
and the iFlow got executed successfully. But if I use more than one SQL command in Content Modifier 3, it fails with the same error "com.sap.it.rt.adapter.http.api.exception.HttpResponseException: An internal server error occured: SAP DBTech JDBC: [257]: sql syntax error: incorrect syntax near "INSERT": line 2 col 1 (at pos 111)."
Could you please advise?
Thanks,
Indu Khurana.
Hello Indu,
I fear I cannot give a final answer now, but I assume that it is not possible to inject multiple SQL commands into the database in one step. Note that the Content Modifier, at runtime, represents one processing step in an integration scenario. In scenarios like the one described in the blog, the Content Modifier defines an SQL command that is injected into the database in a subsequent step (a Request Reply or Send step, followed by one JDBC channel). To have multiple SQL commands fired in sequence, I assume you would also need to design your integration flow so that multiple Content Modifiers are processed (by the Cloud Integration runtime) in sequence. But I have not had time yet to test this systematically.
However, you can construct complex SQL statements (e.g. containing if/else and other logical operations) in one SQL command (in one Content Modifier).
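For example, the following (untested) sketch combines the existence check with either an update or an insert in a single statement, reusing the products table and the headers from the blog and assuming ASE Transact-SQL syntax (other database systems such as HANA may need different syntax):
IF EXISTS (SELECT 1 FROM products WHERE productId = '${header.productIdentifier}')
    -- record exists: refresh the stored product name
    UPDATE products SET productName = '${header.productName}'
    WHERE productId = '${header.productIdentifier}'
ELSE
    -- record does not exist yet: create it
    INSERT INTO products (productId, productName)
    VALUES ('${header.productIdentifier}', '${header.productName}')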
Does this help to solve your problem?
Best regards,
Peter
Hello Indu,
confirming my assumption above: the JDBC adapter does not yet support batch jobs (I got this confirmed by one of our development experts).
Best regards
Peter
Hello Peter,
Thank you for your response.
I tried a complex SQL statement containing if/else in one SQL command, like you have done in your blog, but it failed. Am I missing something?
Please advise.
Thanks,
Indu.
Hello Indu,
did you use the command as shown in the blog above? Sometimes there can be an issue if you use the wrong character for the single quote ('). And have you also checked that the dynamic parameters are resolved correctly? Is there an error message raised by CPI?
Best regards
Peter
Hi Peter,
That’s a really nice blog on the DB connection. We are facing one roadblock with creating the access token for Oracle DB connectivity. Once SAP releases the update on on-premise DB connections from CPI, we have to move our two-layer middleware architecture to a CPI-only system.
We added the TCP entry for the DB in the Cloud Connector portal. But we are unclear about creating/generating the access token, since it is not an SAP HANA/ASE DB. We kindly request your input. Once the access token is generated, all the remaining steps are clear to us.
The access token is the main roadblock at the moment.
Thanks & Regards,
Parimal Sarage
Hello Parimal,
thank you for your feedback!
As far as I am informed, the feature that you are trying to use (connecting to an on-premise DB) is not yet available, but it is planned.
Right now, you can connect CPI to a database on AWS.
Best regards
Peter
Thank you. This post helped a lot.
The access grant and deployment of the JDBC data source worked perfectly.
However, despite the correctly entered JDBC string and the data binding shown in the subaccount overview, the connection to the HANA schema ID on a subaccount within the same global account is refused with error -318.
Might this be HANA MDC related?
Thank you so much. Maybe CPI to HANA MDC is a completely different scenario.
Hi Peter,
Thanks for your blog.
Through this blog, I have confirmed that multiple SQL statements can be processed sequentially in one integration flow.
Furthermore, I am looking for a way to handle transactions for multiple SQL statements in one integration flow.
For example, I use the same JDBC data source (in one integration flow).
In the first step, I insert into table A (using one Content Modifier and a Request Reply with the JDBC adapter).
In the next step, I insert into table B; however, an exception occurs there.
In this situation, I want the first step to be rolled back.
So I tried to use the JDBC data source option autocommit=false, but I don't know how to perform the commit operation in the normal (success) case.
Could you please answer my question?
Thank you so much.
Dear Sang-Won,
Thank you for the question and the feedback!
If I understand your comment correctly, you expect transaction handling to be ensured in such a way in your scenario that a certain database transaction is rolled back when there’s an error.
Exactly this behavior should be ensured when you have enabled transaction handling for your integration flow (click the Integration Process shape and, in the Processing tab, check the setting for Transaction Handling; it should be Required for JDBC).
However, I do not get where you can make the setting autocommit=false. Can you elaborate a bit more on this?
Best regards
Peter
Dear Peter
Thank you for your interest.
Could you refer to the other question I posted?
I am looking for a way to handle transactions with the JDBC receiver adapter.
(https://answers.sap.com/questions/13310777/can-sap-cpi-jdbc-receiver-adapter-support-transact.html?childToView=13312762#answer-13312762)
I tried to apply the autocommit=false property as shown below.
In this case, no SQL query sent through the JDBC receiver adapter is committed to the database.
Best regards
Dear Sang-Won,
sorry for the delayed response. In the meantime, I can hopefully contribute some more clarifications.
As mentioned in the source you pointed out above, the Transaction Handling feature (Required for JDBC) that you can configure for the Integration Process does not relate to "remote" databases connected with the JDBC adapter. The Required for JDBC option relates to the database located on the Cloud Integration tenant (that means, for example, the data store content and variables). This configuration setting can't be applied to "remote" databases connected with the JDBC adapter.
However, you have the option to implement your desired transaction handling using integration flow modeling options such as the exception subprocess. That way, you can define the desired processing for the case that a certain database transaction (injected with the JDBC adapter into the database) fails. We have not published any specific guidelines regarding transaction handling modeling for JDBC adapter scenarios. But to learn more about how to work with exception subprocesses in general, check out the design guidelines at: Handle Errors Gracefully.
I hope I could help you. Thanks again for your question!
Best regards
Peter
Dear Peter
Thank you very much for your kind answer.
So, we are going to consider using the pure Java JDBC API in a Groovy script for transaction handling.
However, it seems inefficient to manage the connection to the (remote) JDBC database that way.
Best regards.
Hi Peter,
Is it possible to create a stored procedure inside the HANA DB and call it from CPI?
Thanks,
Dinesh
Hello Dinesh,
yes, this is possible. I checked with the development team. Does this information help you?
Best regards
Peter
Hi Peter,
I am able to execute a HANA Cloud DB stored procedure by adding the code below in the Content Modifier.
CALL DVR.PROC_TEST_INSERT_FORCPI()
But I am not able to execute a procedure with an 'OUT' parameter.
CALL DVR.PROC_TEST_INSERT_FORCPI(?) is giving me the error below.
Error Details
org.springframework.jdbc.UncategorizedSQLException: PreparedStatementCallback; uncategorized SQLException; SQL state [null]; error code [0]; Number of parameters mismatch. Expected: 1, was: 0; nested exception is java.sql.SQLException: Number of parameters mismatch. Expected: 1, was: 0, cause: java.sql.SQLException: Number of parameters mismatch. Expected: 1, was: 0
Thank you,
Hello Rajesh,
thanks for the feedback!
I have contacted the development team, and they gave me the following tip.
Assume you want to call a stored procedure that returns a value through an OUT parameter, for example a procedure like the following (a hypothetical definition, as I don't know your exact procedure body):
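CREATE PROCEDURE DVR.PROC_TEST_INSERT_FORCPI (OUT out_count INTEGER)
LANGUAGE SQLSCRIPT AS
BEGIN
    -- hypothetical body: insert a demo record, then return the new row count through the OUT parameter
    INSERT INTO products (productId, productName) VALUES ('HT-9999', 'Demo product');
    SELECT COUNT(*) INTO out_count FROM products;
END;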
You can then call the stored procedure from the Content Modifier with a statement like this:
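CALL DVR.PROC_TEST_INSERT_FORCPI(?)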
Here, “?” represents the OUT parameter.
Does this help solve your problem?
Best regards
Peter
Hi Peter Gutsche,
I am trying to connect to a PostgreSQL database (SAP BTP, hyperscaler option) that is hosted on an SAP BTP CF trial account, via SAP CPI on the same trial account.
I get the error message:
org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is org.postgresql.util.PSQLException: Connection to postgres-04f21f76-85bf-4909-9476-b579cdc8893d.ce4jcviyvogb.eu-central-1.rds.amazonaws.com:4638 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.,
I am guessing I also need some connectivity set up between CPI and PostgreSQL. Do you have any ideas, or has someone gotten a PostgreSQL-on-BTP to CPI connection running?
Hello Mark,
I did not set up a connection to such a database system myself (missing experience), but did you check whether the URL pattern was correctly specified?
You can find more information in the online documentation.
If you have verified this and the problem remains, please create a ticket.
Best regards
Peter
Hi Peter,
thanks for your response. I think my connection string is at least like the one in the documentation for AWS RDS:
jdbc:postgresql://postgres-xxxxxfxx-85bf-4909-9476-b579cdc8893d.ce4jcviyvogb.eu-central-1.rds.amazonaws.com:4638/cMXXXXxxxZQ?ssl=false&loglevel=2
Is there a possibility to create a ticket with trial accounts?
Thank you so much for your help.
BR
Mark