Cloud Integration – How to configure Transaction Handling in Integration Flow
This blog discusses the options for configuring transaction handling for JMS and JDBC transacted resources in your integration flow. It describes the available configuration options, the allowed combinations, and existing limitations, and provides sample configurations explaining when to use which option and why.
Transaction Handling Configuration in Integration Flow
Many integration scenarios in Cloud Integration use transacted resources, like data stores or message queues, that have to be executed in a single end-to-end transaction to ensure data consistency. Before describing the options we need to understand the basics and why transaction handling is required at all.
Some flow steps and adapters use persistency to store data in the database or in JMS queues. To ensure this is done consistently for the whole process a transaction handler is required, that takes care that the whole transaction is either committed or rolled back in case of an error.
For example, take a simple scenario: during message processing a variable is written, and afterwards some data is deleted from a data store in the same process. If an error occurs later in the processing, both steps are rolled back, so that neither the variable is stored nor the deletion from the data store is executed. Without a transaction manager, each step would have been committed individually, even if the overall processing of the message ended with an error.
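The difference can be illustrated with a small sketch, using plain Python with sqlite3 standing in for the tenant database (the table names and values are invented for illustration; this is not CPI code):

```python
import sqlite3

# In-memory database standing in for the tenant database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE variables (name TEXT, value TEXT)")
con.execute("CREATE TABLE data_store (id TEXT)")
con.execute("INSERT INTO data_store VALUES ('msg-1')")
con.commit()

# With a transaction handler, both steps share one transaction,
# so an error in a later step rolls both of them back.
try:
    con.execute("INSERT INTO variables VALUES ('counter', '42')")  # Write Variables step
    con.execute("DELETE FROM data_store WHERE id = 'msg-1'")       # Data Store Delete step
    raise RuntimeError("error in a later flow step")               # simulated failure
except RuntimeError:
    con.rollback()  # without this rollback, each statement would stay applied

# Neither step took effect: no variable stored, entry still in the store.
print(con.execute("SELECT COUNT(*) FROM variables").fetchone()[0])   # 0
print(con.execute("SELECT COUNT(*) FROM data_store").fetchone()[0])  # 1
```

Without the shared transaction (step-level commits), the variable and the deletion would remain in the database even though the message processing failed.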
The transaction handler can either handle a JMS transaction or a JDBC transaction, not both. There is no configuration option available for distributed transactions between JMS and JDBC resources in cloud integration.
JDBC Transacted Resources
In cloud integration some flow steps and adapters use JDBC persistency, which requires a JDBC transaction handler to ensure transactional end-to-end processing. This is the case for the following flow steps and adapters:
- Data Store Operations (Write, Delete)
- Write Variables (Local and Global)
- XI Sender and Receiver Adapter using EO with Data Store as temporary storage (only with June 2018 release)
All these flow steps and adapters execute transactions on the database. To ensure end-to-end consistency of the data processed in the scenario, you configure a JDBC transaction manager for these flow steps. The XI Adapter only needs a JDBC transaction manager if it is used on the receiver side in multicast scenarios (splitter, sequential multicast). Some specific considerations are described in the section ‘Recommendations and Restrictions’ below.
JMS Transacted Resources
In Cloud Integration, some adapters use JMS persistency. These adapters are available only to Cloud Integration customers with an Enterprise License. This is the case for the following adapters:
- JMS Sender and Receiver Adapter
- AS2 Sender Adapter
- XI Sender and Receiver Adapter using EO with JMS as temporary storage (only with June 2018 release)
Nevertheless, not all of these adapters need a JMS transaction manager, because the retry in the sender adapters is done independently of the transaction handler configuration. Only the JMS Receiver adapter may need a JMS transaction manager, depending on the overall scenario. This is described in more detail in the section ‘Recommendations and Restrictions’ below.
Configuration Options in Cloud Integration
In an integration flow, transaction handling can be configured in two places: in the main process and in local processes.
Configuration in Main Process
Normally the end-to-end transaction for a scenario is configured in the main process, to ensure the transaction is either committed or rolled back.
Select the integration process to get its properties. On the Processing tab you can configure the transaction handling.
There are three configuration options available; the default value is Required for JDBC.
- Not Required: If you don’t use any transactional resources in your scenario or don’t need an end-to-end transactional behavior, select this option.
- Required for JDBC: If you use flow steps with JDBC transacted resources or several XI receiver adapters in a multicast scenario in the process, that need an end-to-end transaction, select this option.
- Required for JMS: If you use several JMS receiver adapters or JMS adapter in Send step, select this option. This option together with the JMS adapter is only available for Cloud Integration customers with an Enterprise License.
Configuration in Local Process
In case local processes are used in the integration flow, transaction handling can additionally be configured on this level as well.
Select the local process to get its properties. On the Processing tab you can configure the transaction handling.
There are three configuration options available; the default value is From Calling Process.
- From Calling Process: If you want to inherit the setting from the main process, select this option. If the main process has no transaction handler defined, the local process will also not get an end-to-end transaction. If the main process has a transaction handler defined, the local process will join the transaction from the main process. For consistent end-to-end handling this is the recommended option.
- Required for JDBC: If you use flow steps with JDBC transacted resources in the local process that need a transaction, it may be useful to select this option, for example if the calling main process does not have transaction handling defined. Note that in this case the transaction will be committed or rolled back after execution of the local process, not after execution of the main process. If the main process also has a JDBC transaction manager configured, this option is equivalent to From Calling Process: the transaction from the main process will be joined.
- Required for JMS: If you use the JMS adapter in a Send step in the local process, it may be useful to select this option, for example if the calling main process does not have transaction handling defined. Note that in this case the transaction will be committed or rolled back after execution of the local process, not after execution of the main process. If the main process also has a JMS transaction manager configured, this option is equivalent to From Calling Process: the transaction from the main process will be joined. This option together with the JMS adapter is only available for Cloud Integration customers with an Enterprise License.
Important: It is not allowed to configure JMS transaction in main process and JDBC transaction in local process and vice versa, because there is no support for distributed transactions in Cloud Integration.
Old Version of Processes without Transaction Handling Configuration
In the past, with the older version of processes, no configuration option was available for transaction handling on process level. This means that if you open existing integration flows and select the process or local process, you may not have the option to configure the transaction manager. In this version, the JDBC transaction manager is always used.
You need at least version 1.1 of the process or local process to have the configuration options. Either add a new process to your integration flow and remodel the process to get the transaction handling options, or use the migration of the process or sub-process, available with the 12-December-2017 update, to avoid remodeling the whole process.
The migration is described in detail in Blog ‘Versioning and Migration of Components of an Integration Flow’.
Recommendations and Restrictions
Several important restrictions exist for configuring transactions. Some flow steps are not supported with transacted resources, some require transactions, and some specific combinations are not allowed or not recommended. Carefully check the listed restrictions and recommendations.
Data Store Operations and Write Variables
Data store operations and the writing of variables can benefit from a JDBC transaction manager, but can also be used without one. In that case the database operation is committed on single step level; no end-to-end transaction is held.
Note that the ‘Delete After Fetch’ option in the data store Get and Select operations is always executed at the end of the processing: the message is deleted after successful processing, or not deleted if the message processing failed. This is independent of the transaction handling configured.
For the Aggregator flow step it is mandatory to use a JDBC transaction handler, because otherwise aggregations cannot be executed consistently. Because of this, you will get a check error if you configure an aggregator without a JDBC transaction manager.
JMS Receiver Adapter
If only one JMS Receiver channel is used, without a splitter or sequential multicast, no JMS transaction handler is required because the JMS transaction can be committed directly. For such flows you should configure no transactions or, if required, JDBC transactions. But note: if a JDBC transaction manager is used in an integration flow with one JMS Receiver and a failure occurs during the final JDBC commit, the JMS transaction may already have committed the data. This is because distributed transactions between JMS and JDBC are not supported. This behavior can lead to duplicate messages, because the message will get a failed status and the retry is normally done from the sender. Make sure your scenario can handle such duplicates, or do not mix the JMS receiver with JDBC transacted resources in your integration flows.
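The duplicate-message risk can be sketched conceptually, without assuming anything about CPI internals (the `Resource` class and all names here are invented for illustration): two resources are committed independently, so a failure of the second commit cannot undo the first, and a sender retry re-delivers the whole message.

```python
class Resource:
    """Minimal stand-in for a transacted resource (JMS queue or database)."""
    def __init__(self, name):
        self.name = name
        self.committed = []
        self.pending = []
        self.fail_on_commit = False

    def write(self, msg):
        self.pending.append(msg)

    def commit(self):
        if self.fail_on_commit:
            raise RuntimeError(f"commit failed on {self.name}")
        self.committed += self.pending
        self.pending = []

def process(msg, jms, jdbc):
    """One processing run: no distributed transaction, so the two
    resources are committed independently, one after the other."""
    jms.write(msg)
    jdbc.write(msg)
    jms.commit()    # JMS commits first ...
    jdbc.commit()   # ... and a failure here cannot undo the JMS commit

jms, jdbc = Resource("jms"), Resource("jdbc")
jdbc.fail_on_commit = True
try:
    process("msg-1", jms, jdbc)
except RuntimeError:
    jdbc.pending = []            # JDBC work is rolled back ...
    jdbc.fail_on_commit = False
    process("msg-1", jms, jdbc)  # ... and the sender retries the whole message

print(jms.committed)  # ['msg-1', 'msg-1'] -> duplicate in the JMS queue
```

The same asymmetry applies in the other direction (JMS sender with JDBC resources, as described below): whichever resource commits first stays committed if the second commit fails.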
If several JMS Receiver channels, or a sequential multicast or splitter followed by a JMS Receiver Adapter, are used in one integration process, a JMS transaction handler will ensure that the data is consistently updated in all JMS queues. If no JMS transaction handler is defined for these scenarios, a message processing error that occurs after some of the messages have already been written to a queue can cause a large number of duplicate messages.
If a JMS Receiver channel is used in a Send step, a JMS transaction handler is needed to ensure that the data is consistently updated in the JMS queue at the end of the processing. If an unhandled error occurs in the integration flow processing after the Send step, the whole transaction is rolled back.
If a JMS Sender channel and one or more JMS Receiver Adapters are used in one integration flow, you can optimize the number of transactions used in the JMS instance by using a JMS transaction handler, because then only one transaction is opened for the whole processing. For such configurations select the JMS transaction handler.
JMS, XI and AS2 Sender Adapter
These adapters do not need a JMS transaction manager to be configured, because the retry handling works independently of the selected transaction handler. Select the transaction manager as required by the scenario configured in the integration flow, keeping in mind the following additional point:
If you use flow steps with JDBC resources (data store, variables, aggregator) together with the JMS, XI or AS2 Sender, you may select the JDBC transaction manager. But note: if a failure occurs during the final commit in JMS (removing the message from the queue), the JDBC transaction may already have committed the data. This is because distributed transactions between JMS and JDBC are not supported. This can lead to duplicate messages, because the message will stay in retry in the JMS queue although the JDBC resources have been updated. Make sure your scenario can handle such duplicates, or do not mix the JMS, XI or AS2 sender with JDBC transacted resources in your integration flows.
If a JMS, XI or AS2 Sender channel and one or more JMS Receiver Adapters are used in one integration flow, you can optimize the number of transactions used in the JMS instance by using a JMS transaction handler, because then only one transaction is opened for the whole processing. For such integration flows select the JMS transaction handler.
XI Receiver Adapter
The XI Receiver adapter with Quality of Service as Exactly Once can be used with either JMS or Data Store for temporary message storage.
If only one XI Receiver channel is used, without a splitter or sequential multicast, no JMS or JDBC transaction handler is required because the transaction can be committed directly. For such flows you should configure no transactions or, if required, JDBC transactions. But note: if a JDBC transaction manager is used in an integration flow with one XI Receiver using JMS, and a failure occurs during the final JDBC commit, the JMS transaction may already have committed the data. This is because distributed transactions between JMS and JDBC are not supported. This behavior can lead to duplicate messages, because the message will get a failed status and the retry is normally done from the sender. Make sure your scenario can handle such duplicates, or do not mix the XI receiver with JMS storage with JDBC transacted resources in your integration flows.
If several XI Receiver Adapters are used in a sequential multicast, or a splitter is followed by an XI Receiver Adapter in one integration process, a JMS or JDBC transaction handler (depending on the storage option used in the XI adapter) will ensure that the data is consistently updated in all JMS queues or data stores. If no transaction handler is defined for these scenarios, a message processing error that occurs after some of the messages have already been written to a queue or data store can cause a large number of duplicate messages.
If an XI Receiver channel is used in a Send step (available with the 30-September-2018 update), a JMS or JDBC transaction handler (depending on the storage option used in the XI adapter) is needed to ensure that the data is consistently updated in the JMS queue or data store at the end of the processing. If an unhandled error occurs in the integration flow processing after the Send step, the whole transaction is rolled back.
If a JMS Sender channel and one or more XI Receiver Adapters with a JMS queue as temporary storage are used in one integration flow, you can optimize the number of transactions used in the JMS instance by using a JMS transaction handler, because then only one transaction is opened for the whole processing. For such configurations select the JMS transaction handler.
If the error happens while sending the message to the receiver backend, there is no rollback, because the whole processing up to storing the message in the temporary storage was successful. A retry is done from the temporary storage only, and this processing runs in a new transaction.
General and iterating splitters are not allowed with Parallel Processing switched on if transacted resources follow the splitter, neither with a JMS nor with a JDBC transaction. If a splitter is needed with either a JMS or a JDBC transaction, do not set the Parallel Processing flag.
Parallel multicast is not allowed with transacted resources within the multicast branch, neither with a JMS nor with a JDBC transaction. If multicast is needed with either a JMS or a JDBC transaction, use the sequential multicast.
Guideline for Using Transactions
There are four important guidelines to follow:
1. Configure the transaction as short as possible!
Transactions always consume resources on the persistency used, because the transaction needs to be kept open during the whole processing it is configured for. When configured in the main process, the transaction is already opened at the beginning of the overall process and is kept open until the whole processing ends. In complex scenarios and/or with large messages, this may cause transaction log issues on the database or exceed the number of available connections.
To avoid this, configure the transactions as short as possible!
2. Configure the transaction as long as needed for a consistent runtime execution!
As already explained, for end-to-end transactional behavior you need to make sure all steps belonging together are executed in one transaction, so that data is either persisted or completely rolled back in all transactional resources.
3. Configure only one transaction if multiple JMS components are used!
As already explained, if a JMS, XI or AS2 Sender channel and one or more JMS Receiver Adapters are used in one integration flow, you can optimize the number of transactions used in the JMS instance by using a JMS transaction handler, because then only one transaction is opened for the whole processing.
4. Avoid mixing JDBC and JMS transactions!
Cloud Integration does not provide distributed transactions, so it is not possible to execute JMS and JDBC transactions together in one transaction. In error cases the JDBC transaction may already be committed, and if the JMS transaction cannot be committed afterwards, the message will still stay in the inbound queue or will not be committed to the outbound queue. In such cases the message is normally retried from the inbound queue, the sender system or the sender adapter and can cause duplicate messages.
Either the backend must be able to handle duplicates, or you must not mix JMS and JDBC resources.
Check out the sample configurations below for more information.
Sample Scenarios using Data Store Operations
In the following sections we showcase some simple sample configurations using JDBC transacted resources and explain the recommended settings.
Scenario 1: Using Data Store in Main Process
This scenario uses a Timer to trigger the processing: a data store Get in the main process, then a Script is executed, and afterwards the message is deleted from the data store using a Delete step and sent to the receiver. To ensure that the message is only deleted from the data store if it is successfully sent to the receiver, the JDBC transaction handler is to be used for this process.
Without the JDBC transaction handler, the message would be deleted from the data store on execution of the data store Delete step and not at the end of the whole process.
Scenario 2: Using Data Store in Local Process
In this scenario the Timer is followed by an OData call to fetch some data. Afterwards a local process is called, in which the message is stored in a data store via a data store Write step and a confirmation message is sent to the receiver system. The call to the receiver system is also executed in the same local process.
For such a scenario you can configure the JDBC transaction either in the main process or in the local process. The difference is that when it is configured in the main process, the database transaction is already opened on the database at the beginning of the overall process and is kept open until the whole processing ends. In complex scenarios and/or with large messages, this may cause transaction log issues on the database. This should be avoided.
So the recommendation would be, to
- ensure that all steps needed in one transaction, the data store write and the call to the receiver, are contained in the local process and
- select Required for JDBC in local process,
- in main process select Not required.
The first steps are not needed in the transaction because nothing is persisted yet and in case of an error the next timer start event will trigger the processing again.
With this configuration the transaction on the database is kept open only as long as needed and not for the whole processing of the message.
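The effect of scoping the transaction to the local process can be illustrated with a rough sketch (plain Python with sqlite3 standing in for the database; the timings and helper names are invented, this is not CPI code): the transaction is opened only around the steps that need it, not around the long-running fetch.

```python
import sqlite3
import time

con = sqlite3.connect(":memory:", isolation_level=None)  # explicit BEGIN/COMMIT
con.execute("CREATE TABLE data_store (id TEXT)")

def odata_fetch():
    time.sleep(0.2)          # stands in for a long-running OData call
    return "msg-1"

def send_to_receiver(msg):
    pass                     # stands in for the call to the receiver system

# Main process, Not Required: no transaction is open during the slow fetch.
msg = odata_fetch()

# Local process, Required for JDBC: the transaction spans only write + send.
start = time.monotonic()
con.execute("BEGIN")
try:
    con.execute("INSERT INTO data_store VALUES (?)", (msg,))  # Write step
    send_to_receiver(msg)                                     # receiver call
    con.execute("COMMIT")
except Exception:
    con.execute("ROLLBACK")
    raise
tx_open = time.monotonic() - start

# The transaction was open only for the local process, not the 0.2 s fetch.
print(tx_open < 0.1)  # True
```

Had the transaction been configured in the main process instead, it would also have covered the fetch and stayed open for the whole message processing.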
Sample Scenarios using JMS Adapter
Scenario 3: Using Splitter with JMS Receiver Adapter in Main Process
In this scenario a splitter step is configured followed by a JMS receiver adapter in the main process. To ensure that the split messages are only persisted in the message queue if the whole message was split successfully, for this process, a JMS transaction handler is to be used.
Furthermore, you need to make sure Parallel Processing is not selected in splitter.
Without a JMS transaction handler, the individual split messages would be saved to the message queue on execution of each JMS receiver call and not at the end of the whole processing. This could lead to a state in which some split messages are available in the queue, but the overall process ended with an error and the whole start message is reprocessed, leading to duplicate messages.
Scenario 4: Using JMS Adapter in Send Step in Local Process
In this scenario a JMS receiver is used in the local process in a send step to store messages into a JMS queue. This local process is reused in the main process at several places. To ensure the message is sent to the queue only if the overall processing in the main process ended successfully an end-to-end JMS transaction is needed. In the main process you need to select the JMS handler.
In local process the transaction of the main process needs to be joined.
Without JMS transaction handler it could happen that in error cases messages are persisted in the JMS queue even if the call to the receiver was not successful and the overall message status is failed.
Scenario 5: Using JMS Sender and Receiver Adapter in Main Process
A sample scenario using JMS sender and receiver adapter is described in detail in the blog ‘Configure Asynchronous Messaging using JMS Adapter’.
Scenario 6: Using XI Adapter in Main Process
A sample scenario using XI adapter is described in detail in the blogs ‘Configuring Scenario Using the XI Receiver Adapter‘ and ‘Configuring Scenario Using the XI Sender Adapter‘.
Hi Mandy Krimmel,
Excellent Blog on Transaction Handling use cases in Cloud Integration.
Hi Mandy Krimmel,
I have a custom integration flow between SFSF and WFS which executes for 1 hour 5 minutes and fails with the below error message.
"org.springframework.transaction.TransactionSystemException: Could not commit JDBC transaction; nested exception is java.sql.SQLException: JZ0C0: Connection is already closed., cause: java.sql.SQLException: JZ0C0: Connection is already closed."
The integration flow contains Aggregator/Variables/Write operations. Hence transaction handling is defined as JDBC and the timeout parameter is set to 120 minutes at the main Integration Process level. This doesn't solve the problem. Please help here to avoid this issue.
This needs to be checked in detail, e.g. are there sub-processes that have a lower timeout, in which step does the timeout happen (check the MPL details), etc.
Could you please open a ticket for the issue on LOD-HCI-PI-RT. Please attach the integration flow project and a message processing log with the error.
Logged ticket with SAP. Thanks.
Thanks for this detailed explanation. Very useful.
Thanks a lot Mandy ,nice blog with detailed use case scenarios
Hi Mandy Krimmel ,
Thanks for the very informative blog.
I have implemented transaction handling in one of my requirements where I am writing the data for a successful HTTP response, i.e. 200 OK. When there is a failure HTTP response, i.e. 404 Not Found, the SELECT data store operation has to fetch the data written for all the successful cases.
As shown in the Scenario2: I have maintained Required for JDBC for the Local Integration Process and kept Not required in the Main Integration process. But, no luck.
After the SELECT step, I am not getting the complete payload while processing it. I am getting only the following payload:
<?xml version="1.0" encoding="UTF-8"?><messages></messages>
Is there any limitation of SELECT data store operation or transaction handling in exception subprocess?
Please let me know if I am missing anything.
Trying to understand your scenario: the Fetch Invoices process has set JDBC transaction required, right? This means that the whole transaction is rolled back in error case. This means if you have some entries written with Write data they would be rolled back in error case and then for sure nothing is available in the data store.
Why would you expect that there are entries in the data store in error case if transaction handling is on?
Have you tried with transaction handling off in the local process?
But maybe I do not fully understand your scenario.
Nice Blog !
I have a requirement where I have a main integration process along with two local integration processes being called. The main integration process has an SFTP sender and receiver configured.
The first local integration process has an Aggregator in it, so I have to use transaction handling as JDBC.
Hence in the main process as well I have to use transaction handling as JDBC only. For the second local integration process I have kept JDBC as well.
When I try to run the entire scenario the output I am getting is an ID like this- a5******-6b13-****-a162-8cd9e284**** and not the output I am expecting
Please let me know what combination of transaction handling would need to be configured between the main process and the 2 local integration processes.
My 1st local Integration Process has Aggregator and the 2nd one has XSLT mapping
I do not understand the overall scenario, but here some recommendations:
Aggregator requires the JDBC transaction handler, this is correct. But why also in the main process, if the aggregator is used in the local process? Wouldn't it be sufficient to have it only in the local process?
XSLT mapping does not need a transaction handler at all. So, this second local process does not need a transaction handler at all.
Maybe the transaction handling has nothing to do with the unexpected result at all; it may simply be the logic in the integration flow?
Probably the best would be you open a ticket on LOD-HCI-PI-OP-SRV so that the colleagues responsible for the aggregator could check.
When I use 'not required' in the main Integration Process it gives me an error stating 'The process requires a transaction because Aggregator is used'
Doesn't allow me to Deploy the flow-
Hi Mandy ,
Thanks for the details blog.
I just need clarification on the timeout field.
I have a simple flow (Transaction handling set to = Required for JDBC):
HTTP-->DataStore(Write) with timeout as 1 min ---> Script ( Using sleep function in script to wait for 5 mins )--> End.
Now, as the whole process takes more than 5 minutes, which has already crossed the timeout mentioned for transaction handling, my expectation was that after 1 minute the flow fails with a timeout error, but I see the flow successful after 5 minutes. So I need to know the significance of this "timeout" field.
You are right. The problem is that in the currently released version there is a bug in this timeout setting. With the next update at the end of May, the timeout should work again as expected.
Is this bug not fixed yet? I can still see the same behavior: even though the time taken by the flow to process is more than the timeout mentioned in transaction handling, the message flow continues and the data is still committed, with no error thrown.
And also I can see the below description in the SAP help document:
What exactly does this mean that no other operation is terminated? My understanding was if the flow takes more than the timeout mentioned, the flow will throw an error and go in "failed" status.
The timeout will only kick in if the transaction on the DB is still running; then the transaction is terminated and the processing ends. If this does not happen correctly in your execution, I would suggest you open a ticket on LOD-HCI-PI-CON-SOAP so that the experts can have a look.
If only an execution which is not part of the transaction is taking longer, it is still executed and not terminated.
Hi Mandy ,
Taking the below example :
HTTP-->DataStore(Write) with timeout as 1 min ---> Script ( Using sleep function in script to wait for 5 mins )--> content modifier -->End.
In the above flow, the flow should stop processing after 1 min, throw an error and jump into the exception subprocess (assuming we have exception handling done); the content modifier should not be executed. Is my understanding correct? If yes, then this is not happening.
The timeout behavior is not so straightforward, because here we heavily depend on the DB.
The timeout is triggered when a DB operation like write, get or delete is executed on the DB after the specified timeout, but not with the final commit or rollback.
This means if your scenario were HTTP --> Script with sleep --> Data Store Write, it would trigger the timeout at the DB write step, because then the time from flow processing start to the DB action is higher than the timeout.
But in your scenario it will not trigger the timeout, because the DB write is before the sleep and only the final commit or rollback is executed after the sleep. And this does not trigger the timeout error.
I hope this clarifies?
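The explained behavior could be modeled roughly like this (a toy Python model, not actual CPI internals; all class and step names are invented): the elapsed time is checked whenever a DB operation runs, but not at the final commit.

```python
import time

class Transaction:
    """Toy model of the described timeout behavior: the timeout is
    checked on each DB operation, not on the final commit."""
    def __init__(self, timeout_s):
        self.start = time.monotonic()
        self.timeout_s = timeout_s

    def db_operation(self, name):
        if time.monotonic() - self.start > self.timeout_s:
            raise TimeoutError(f"transaction timeout before {name}")

    def commit(self):
        pass  # the final commit does not check the timeout

# Flow A: sleep first, DB write afterwards -> timeout triggers at the write.
tx = Transaction(timeout_s=0.1)
time.sleep(0.2)                      # long-running script step
try:
    tx.db_operation("data store write")
    failed = False
except TimeoutError:
    failed = True
print(failed)  # True

# Flow B: DB write first, sleep afterwards -> only the commit follows,
# so no timeout error is raised.
tx = Transaction(timeout_s=0.1)
tx.db_operation("data store write")
time.sleep(0.2)                      # long-running script step
tx.commit()
print("committed")
```

This matches the example above: with the write before the sleep, the flow ends successfully despite exceeding the configured timeout.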
Thanks for the blog!
Is there any information on how this transaction handling is influenced/affected when using a Process Direct receiver adapter in the mix? Are there special considerations required?
Let's say I use a JMS sender channel to trigger the iFlow. At some point, a Process Direct adapter is used to call another process, and this target process uses a JMS receiver adapter.
Do these separate iFlows share some transaction handling or are they completely separate? What if there are a couple of jumps (Process Direct calls) in a row and at some point we reach a JMS receiver?
Do you maybe have any input on that?
Transactions are not shared across Process Direct calls; the second flow with a JMS receiver will open a new transaction.
Thank you very much for the response. I have an additional question:
As Process Direct is of a synchronous nature, I assume that the transaction in the starting flow will stay open until the flow called via Process Direct returns the response, including all the steps which might happen after the PD call, correct? I guess this is the same behavior for all communication via either Request-Reply steps or calls via End events?
As you mentioned, the called flow has its own transaction handling, which is Independent of the calling Flow. As it will not be part of the transaction of the calling flow, I would understand that the second flow does not create a new transaction except it is set specifically or any of the mentioned special cases.
I am asking as we have had a discussion with some of your colleagues who suggested changing one of our flows in this manner (TH = transaction handling).
The first flow receives an IDoc from SAP and puts it in a JMS Queue, the second processes the IDOCS from the JMS Queue.
I understand that the lower flow needs no JMS TH, but I do not understand why we should set the first one to JMS TH?
In your blog, you stated:
Based on that, I would say both flows do not need JMS TH? Correct? Or does this behave differently if you have multiple main flows in an artifact, such as in my example? I guess not.
Since at some point one of the called Flows will also have a JMS Receiver, I was also interested regarding the transaction handling across flows, but this you already answered. Thanks!
Thanks for your support and best regards,
Let me try to clarify your doubt. Even if you do not explicitly configure a JMS transaction handler, a JMS transaction is always created when a JMS sender or receiver is used. It can either be a short one that consumes or stores the message (no JMS transaction handler configured) or a transaction that starts at the beginning of the processing and lasts to the very end across all processing steps (JMS transaction handler configured).
From your picture above I would say that the first flow does not need a JMS transaction handler.
I am using the XI sender adapter with Data Store as temporary storage (main flow), and this iFlow is connected to other flows via Process Direct; the message is routed to different iFlows based on the content.
So for this scenario, transaction handling is "Not Required", correct? I don't have splitters or multicast anywhere, not even in the next flows. Please clarify.
Correct, in this case transaction handling is not required.
Thanks for the response,
As mentioned, transaction handling is not required for "XI Sender Adapter with Data Store as temporary storage". Then what happens if there are any errors? As transaction handling is set to "Not Required", there are chances of data loss, correct?
Also, how is transaction handling related to DB pool connections, which come as 8 connections by default? If we do not use transaction handling, can we avoid blocking DB connections?
The reason for reporting this issue is that we are facing issues with our tenant where proxy messages from ECC are failing in the CPI system with the below-mentioned error; there are around 300-400 calls received from ECC per minute and this runs once an hour.
We have raised this issue with SAP who suggested us to increase DB connections from 8 to 16 and memory from 4 GB to 8 GB.
The connections and memory were increased as suggested, but the issue still persists.
They further asked us to check the "Transaction handling" in the iFlow, which is set to "Not Required" (referred from this blog).
org.springframework.transaction.CannotCreateTransactionException: Could not open JDBC Connection for transaction; nested exception is java.sql.SQLTransientConnectionException: it-db-rt.l0696 - Connection is not available, request timed out after 30000ms., cause: java.sql.SQLTransientConnectionException: it-db-rt.l0696 - Connection is not available, request timed out after 30000ms.
Inbound processing in endpoint at /ProxyFromCGEToCPI failed with message "Fault:java.lang.IllegalArgumentException: Unable to query 0212af38-153f-1eec-9cf8-a1f18e566b21", caused by "SQLTransientConnectionException:it-db-rt.l0696 - Connection is not available, request timed out after 30000ms."
Your response is much awaited!
Let me try to answer your questions:
Transaction handling seems not to be available for the JDBC adapter. The title of the option "Required for JDBC" is quite misleading in this case... We would need it for the JDBC adapter that we use to read and update records from an on-premise DB. In such a scenario it's crucial to update the read records only once they're processed without error, and otherwise skip the update statement in order to reprocess them in the next run. The same is standard on PI/PO.
Any inputs on that?
For the JDBC adapter, atomic/non-atomic behavior is supported (transaction handling). Please refer to the JDBC help documentation link -
Thank you for your answer. But that's not what I mean. I mean a transaction that stays open during the processing time of the whole IFlow, not only of the batch statement. So if I read a DB record and send the data to a mapping that fails, I must not update the record's status flag. Only if the processing of the iflow ends successfully, I am allowed to update the status flag.
In PI/PO this is not an issue since messages get automatically queued, and only if the message was queued (persisted) successfully, the DB update statement gets executed (in the same transaction, so there is also no risk of updating records that got newly added in the meantime by the sender).
In Cloud Integration, to achieve something similar, the effort is pretty high (we have to save the unique key of the picked-up records and update them again after either persisting the message or processing it) and we still have no 100% safety that the DB records get updated successfully after the message got persisted (or processed) on Cloud Integration.
So I stand by my opinion that the "JDBC Transaction" handling feature is not complete until it actually includes transactions for the JDBC adapter valid during the whole iFlow runtime.
I have created a customer influence request here: https://influence.sap.com/sap/ino/#/idea/285696