We had a requirement to poll a database and post IDocs into the R/3 system. We could not post the IDocs as a single batch, as explained in another similar weblog, because each row in the database had to be updated back with a Read status. The following explains in detail how we went about it and the implementation.

Data Feed:
A stored procedure is executed by a sender JDBC channel.

The SP returns a resultset containing all the rows from the Vendor table. The structure of the delivered payload is shown below. Note that the Document Name and Document Namespace in the communication channel must match the message type name and message type namespace.
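Since the original screenshot of the payload is not reproduced here, a sketch may help. The sender JDBC adapter wraps each result row in a `row` element under a root named after the configured Document Name/Namespace; the message type, namespace, and column names below are purely illustrative, not taken from the original scenario:

```python
import xml.etree.ElementTree as ET

# Hypothetical payload as delivered by the sender JDBC adapter: the root
# element carries the Document Name/Namespace from the channel, and each
# result row of the stored procedure becomes a <row> element.
payload = """\
<ns0:MT_VendorResultset xmlns:ns0="urn:example:vendor">
  <row><VENDOR_NO>1000</VENDOR_NO><NAME>Acme</NAME><STATUS>N</STATUS></row>
  <row><VENDOR_NO>1001</VENDOR_NO><NAME>Globex</NAME><STATUS>N</STATUS></row>
</ns0:MT_VendorResultset>"""

root = ET.fromstring(payload)
# One dict per DB row, keyed by column name.
rows = [{col.tag: col.text for col in row} for row in root.findall("row")]
print(rows)
```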

The message delivered by the adapter engine to the Integration Engine (IE) is shown below.

Message Processing Logic:
The requirement is to send multiple IDocs based on the data from the DB. There are two possible ways to do this:
1. Create an external definition for the IDoc and make it a repeating structure. (There are a couple of weblogs available on how to do this.)
2. Create multiple IDocs using the multi-mapping features of XI. (This is explained in this weblog.)

We followed the multi-mapping approach because we also needed the IDoc transport acknowledgement and had to call an update query.
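Conceptually, the multi-mapping changes the target occurrence from 1 to 0..unbounded, so each `row` of the incoming resultset becomes a message of its own. A minimal Python sketch of that 1:n split (element and namespace names are hypothetical, as above):

```python
import xml.etree.ElementTree as ET

NS = "urn:example:vendor"

def split_resultset(payload: str) -> list:
    """Simulate the 1:n multi-mapping: emit one single-row message
    per <row> element in the incoming resultset payload."""
    root = ET.fromstring(payload)
    messages = []
    for row in root.findall("row"):
        msg = ET.Element("{%s}MT_VendorResultset" % NS)
        msg.append(row)  # one-to-one copy of the row node
        messages.append(ET.tostring(msg, encoding="unicode"))
    return messages

payload = (
    '<ns0:MT_VendorResultset xmlns:ns0="urn:example:vendor">'
    '<row><VENDOR_NO>1000</VENDOR_NO></row>'
    '<row><VENDOR_NO>1001</VENDOR_NO></row>'
    '</ns0:MT_VendorResultset>'
)
messages = split_resultset(payload)
print(len(messages))  # one message per DB row
```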

BPM Design:
BPM is required here because we have to execute an update stored procedure and we are using multi-mapping (from SP14 onwards, multi-mapping is also supported outside BPM). All the transformations are done within the BPM. (This is a guideline we follow wherever possible when a BPM is used; it makes the interface easier to understand and maintain.)

The BPM and the container variables are shown below:



The first receive step in the BPM receives the resultset data from the outbound interface. The container variable used for the receive step is resultset. The transformation step uses a multi-mapping and splits the resultset variable into two messages.

Mapping Program is shown below:

As you can see, the message type is the same on both sides and the mapping is one-to-one, but this does the magic of splitting one message with two row nodes into two messages. We do the splitting at this point because the IDoc does not carry a mapping of the vendor number, and we need the vendor number for executing the stored procedure. If the row-to-IDoc mapping were done outside the block, we would not be able to trace back which row created each IDoc, as the IDoc does not contain the primary key. (This is a hypothetical situation to show the functionality, as most of the time the vendor number will be mapped to the IDoc.)

After the first transformation, the multiline container variable allrows is populated, and the block starts its action. The two independent messages created enter the block independently, are transformed into IDocs, and are then sent out from the BPM to the IDoc interface (based on the downloaded IDoc type). The abstract interface used inside the BPM for the IDoc equivalent uses the downloaded IDoc as its message type.

The first send step inside the block sends out the IDoc and waits for the transport acknowledgement from the Adapter Engine. Once the acknowledgement is received, the second transformation, which creates the update query, is executed, and then the send step sends it out to the JDBC receiver, which executes a stored procedure to update the current row that was posted to R/3.
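The update request sent to the JDBC receiver has to follow the receiver adapter's XML document format, where an `action="EXECUTE"` node names the stored procedure in its `table` child. The statement, procedure, and parameter names below are illustrative assumptions, not the ones used in the original interface; this sketch just builds such a document for one row:

```python
import xml.etree.ElementTree as ET

def build_update_request(vendor_no: str) -> str:
    """Build a (hypothetical) JDBC receiver document that executes an
    update stored procedure for one posted row. The receiver JDBC
    adapter expects a statement element wrapping an action="EXECUTE"
    node whose <table> child names the stored procedure."""
    stmt = ET.Element("Statement")
    proc = ET.SubElement(stmt, "UpdateStatus", action="EXECUTE")
    ET.SubElement(proc, "table").text = "SP_SET_READ_STATUS"  # illustrative SP name
    param = ET.SubElement(proc, "VENDOR_NO", type="VARCHAR")  # illustrative parameter
    param.text = vendor_no
    return ET.tostring(stmt, encoding="unicode")

doc = build_update_request("1000")
print(doc)
```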

Usage of ForEach and ParForEach in the BPM Block:
This is one of the most important design decisions to be made; it determines both the performance and the resource utilization of the implementation. Using ForEach consumes fewer resources and is therefore safe from memory-overflow issues, but at the same time it can be awfully slow in posting the IDocs, as the process is executed serially: one IDoc is posted, its update query is executed, and only then is the next IDoc posted. Using ParForEach improves performance dramatically, as all the sub-threads of the BPM execute in parallel, but it runs the risk of heavy resource utilization.

The decision has to be made on a scenario-by-scenario basis, based on the number of expected IDocs, the data latency tolerated by the business, etc.
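The tradeoff can be illustrated with a toy Python model, where each block iteration (send IDoc, wait for the acknowledgement, run the update) is simulated by a short sleep; the timings and worker function are illustrative only:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def post_idoc_and_update(row):
    """Stand-in for one block iteration: send the IDoc, wait for the
    transport acknowledgement, then run the update stored procedure."""
    time.sleep(0.05)  # simulated round trip
    return row

rows = list(range(8))

# ForEach: strictly serial, one row finishes before the next starts.
start = time.perf_counter()
serial = [post_idoc_and_update(r) for r in rows]
serial_t = time.perf_counter() - start

# ParForEach: every row is processed in its own parallel branch.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(rows)) as pool:
    parallel = list(pool.map(post_idoc_and_update, rows))
parallel_t = time.perf_counter() - start

print(f"serial {serial_t:.2f}s, parallel {parallel_t:.2f}s")
```

Both variants produce the same results; the parallel version simply overlaps the waits, at the cost of holding all branches in memory at once.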

The screen below depicts the effects:
