During one of the latest projects I worked on, there was a need to send data across multiple PI 7.1 systems. The main issue with this kind of landscape configuration is finding and monitoring the many possible points of failure along a complex data pipeline.
Building a fully monitored solution is often not easy, which is why I worked on the problem and want to share the solution with the XI/PI Community Experts.
The goal is to handle messages sent from one or more SAP systems (section A) to one or more SAP systems (section D) through two PI systems (sections B and C) belonging to different customer departments, while also providing a data validation mechanism. Data should be processed on section D only if the number of messages received for each independent flow equals the number sent from section A; in other words, the messages have to be treated as a bundle and kept together. The solution should also avoid the use of BPM.
In scenarios like this, it is critical to foresee and manage every possible point of failure while data is exchanged through several systems. Consider IDocs or proxy messages generated on section A getting stuck at many points for technical and/or functional reasons: you will surely waste time struggling with errors that occur anywhere in the pipeline.
An additional requirement is to minimize interactions between the PI systems and to build a monitoring cockpit for the complete landscape, in which all flows can be monitored safely and quickly.
This blog focuses on the sender side of the solution (sections A and B), with only a brief description of the receiver side.
The solution can handle both IDoc and ABAP proxy technologies on section A to send data out of your SAP systems, choosing queued or non-queued communication according to the specific business requirements. The idea is to make the PI system of section B responsible for starting, controlling and alerting on the flow up to the edge with section C, where the received messages are compared with the number of messages “declared” by the PI system of section B. Finally, a cockpit is developed in section C to gather this information and display the results.
The connection between PI systems B and C uses the XI protocol, which connects the Integration Engines directly, bypassing the Adapter Engine: this improves delivery performance and, for instance, automatically enables acknowledgment propagation for each message when requested. Let’s take a closer look at the technical solution for each section.
2.1 SAP systems (Section A)
The proposed solution is currently used to handle an ABAP proxy scenario with queued communication and an IDoc scenario without queued communication, but you can adapt it to different scenarios. With the proxy approach, it is possible to use fixed or variable queue names with an additional prefix to better identify and handle the flows (such as XBQSMATMAS*). Refer here for the documentation on the Proxy Runtime.
The steps to enable IDoc communication on section A are:
- Set up the partner profile in transaction WE20 so that IDocs are collected on the system.
- Import the simple remote-enabled custom function module available here. It selects IDocs for a given message type, receiver partner number, sender partner number and receiver port, then calls the standard report RSEOUT00 to dispatch them, and finally returns the number of successfully processed IDocs.
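To illustrate the idea, here is a minimal, language-neutral sketch (in Python, with illustrative names — the real implementation is the ABAP function module linked above) of what the select-dispatch-count logic does:

```python
# Simplified model of the custom RFC-enabled function module: it selects
# outbound IDocs matching the given keys, dispatches them (on the real
# system this is done by report RSEOUT00), and returns how many succeeded.
from dataclasses import dataclass

@dataclass
class IDoc:
    mestyp: str   # message type
    rcvprn: str   # receiver partner number
    sndprn: str   # sender partner number
    rcvpor: str   # receiver port
    status: str   # '30' = ready for dispatch, '03' = passed to port

def dispatch_idocs(idocs, mestyp, rcvprn, sndprn, rcvpor):
    """Dispatch all matching ready IDocs and return the processed count."""
    selected = [d for d in idocs
                if d.status == "30"
                and d.mestyp == mestyp
                and d.rcvprn == rcvprn
                and d.sndprn == sndprn
                and d.rcvpor == rcvpor]
    for d in selected:
        d.status = "03"   # mark as dispatched
    return len(selected)
```

The returned count is what PI system (B) later “declares” to PI system (C) for validation.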
For enabling Proxy communication:
- After writing your ABAP code for handling outbound proxy messages with queued communication, go to transaction SMQR and deregister the given queue name. This way, the generated messages will be stuck in the inbound queue (transaction SMQ2) and will appear in transaction SXMB_MONI in status “Scheduled”.
2.2 PI system (Section B)
As stated before, the idea is to give the PI system (B) control over checking the availability of the SAP systems (A) and of the second PI (C) before transmission starts, verifying how the previous run of the flow ended, triggering message sending from A, updating the transmission details on C, and also providing the monitoring feature described here.
After a bit of debugging on the ABAP side, I found that the standard function module to activate qRFC is TRFC_QIN_ACTIVATE, which is called to trigger proxy message processing. I also needed to write a simple function module, named Z_IDOC_CONTROLM, to handle message activation for IDocs on each source system (A) involved.
So I wrote a report that executes the following steps:
- First checks that the sender and receiver system connections are working, using function module RFC_WALK_THRU_TEST, which also performs authorization checks (better than RFC_PING).
- Calls a remote function on PI system (C) that returns the result of the previous run for a given flow (identified by sending system, outbound interface name, receiving system and inbound interface name). From a logical perspective, the remote function indicates whether there is a blocked or not yet “approved” flow. If there is one, the report stops and raises an alert, because this means an error occurred somewhere in the pipeline and the previously “declared” messages probably did not reach PI system (C).
- Only if the previous call was successful and there are no blocked or unapproved messages, the function module TRFC_QIN_ACTIVATE or Z_IDOC_CONTROLM is called remotely on SAP system (A) to start message processing, returning the number of messages successfully processed.
- The last step calls a remote function on PI system (C) that updates a table with the interface details (sending system, outbound interface name, start time, start date, number of messages processed, receiving system, inbound interface name) to be checked and approved.
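The steps above can be sketched as a single control flow. This is a hedged Python model with illustrative names (the actual report is the ABAP code linked below); `rfc` stands for any object exposing the four remote calls the report makes:

```python
# Sketch of the trigger report's control flow on PI system (B):
# 1) connectivity check, 2) verify previous run approved on PI (C),
# 3) activate queued messages on SAP system (A), 4) declare the count on (C).

def run_trigger(flow, rfc):
    """Return the number of messages activated for this flow, or raise."""
    if not rfc.connection_ok(flow.sender) or not rfc.connection_ok(flow.receiver_pi):
        raise RuntimeError("connectivity check failed")          # RFC_WALK_THRU_TEST
    if not rfc.previous_run_approved(flow):
        raise RuntimeError("previous run blocked or not approved")  # alert raised
    processed = rfc.activate_messages(flow)   # TRFC_QIN_ACTIVATE / Z_IDOC_CONTROLM
    rfc.declare_count(flow, processed)        # update control table on PI (C)
    return processed
```

Note that the declaration to PI (C) happens only after activation succeeds, so a blocked flow always stops the next run at step 2.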
For a deeper explanation of the report and to download the ABAP code, click the link here.
After creating the report, you have to schedule a job for each interface involved in the scenario, filling in the parameters that identify the flow together with a defined alert category to be raised in case of failure. Take care to set a proper value for Max Queues Activation Time.
Proxy and IDoc Input Parameters
The receiver side of the scenario is out of scope for this blog. From a functional perspective, a job on system (C) checks a table filled with the flow details to verify whether the “declared” number of messages equals the number received and processed on the system. If no errors occur, a flag is set in the table, meaning that a new package of messages can be received and processed. The results of these checks are collected in a quick-view cockpit to complete the monitored solution.
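The receiver-side comparison can be reduced to a simple reconciliation per flow. A minimal sketch (illustrative names, not the actual job on system C):

```python
# Reconcile declared vs. received message counts per flow key.
# A flow is approved (flag set, next bundle allowed) only when the counts match;
# otherwise it stays blocked, which stops the next trigger run on PI (B).

def approve_flows(declared, received):
    """declared/received: dicts of flow key -> message count.
    Returns (approved_flows, blocked_flows)."""
    approved, blocked = [], []
    for flow, n_declared in declared.items():
        if received.get(flow, 0) == n_declared:
            approved.append(flow)
        else:
            blocked.append(flow)
    return approved, blocked
```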
A successful test scenario is shown below, together with a failure case after which an alert is generated and a mail is forwarded to the monitoring group.
Just to summarize, the steps needed to extend this approach to new flows are:
- Create a new entry in the table on PI system (C).
- Deregister the queue name (proxy) or set up the partner profile (IDoc) on SAP system (A).
- Create a variant of report ZBRGP_TRIGGER_MSG on PI system (B) and schedule the job according to business needs.
The scenario described is currently in use at one of my customers, handling proxy and IDoc messages without any kind of problem, with approximately 85,000 messages exchanged per month.
With the proxy approach and the solution built here, there is also the chance to enforce a fixed-time process execution. As mentioned in the SAP documentation for function TRFC_QIN_ACTIVATE (“If you set the import parameter MAXTIME to a value not equal to 0 (0 = unrestricted; the call returns once the queue is empty), the qRFC Manager only activates the queue within the specified time. If the time runs out while the last LUW is being processed, this call returns after the last LUW”), it is possible to calculate the end time for processing messages on PI. This is certainly not ideal, since it is never a good approach to play with time in distributed programming, but one of my customer’s requirements was to send the messages generated on section A within a fixed time slot, neither before nor after.
With few restrictions, this approach provides a fixed, phased delivery process.
A limitation of this approach is that the validating job on PI system (C) needs a minimum amount of time to update and approve all the different flows; let’s call this time t_update (approximately 2 minutes). The triggering job for each flow on PI system (B) must respect this limit. If t_i is the start time of flow i, and t_MAXTIME is the value of the input parameter Max Queues Activation Time of the report, then to avoid a job error: t_(i+1) > t_i + t_update + t_MAXTIME.
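The scheduling constraint can be computed directly. A small sketch (names are illustrative; all times are in the same unit, e.g. minutes):

```python
# Earliest safe start time for flow i+1, given the constraint
# t_(i+1) > t_i + t_update + t_MAXTIME: the next trigger job must not
# start until the previous activation window has closed AND the
# validating job on PI (C) has had time to approve the previous bundle.

def earliest_next_start(t_i, t_update, t_maxtime):
    """Return the lower bound; flow i+1 must start strictly after it."""
    return t_i + t_update + t_maxtime
```

For example, with the previous flow starting at minute 0, t_update = 2 and MAXTIME = 5, the next flow must be scheduled strictly after minute 7.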
In other words, to avoid errors on the scheduled jobs, you have to schedule the next job of each independent flow to start only after the previous job has finished plus the flow approval time on system C.
Finally, I wish to thank Sergio Cipolla for his great support.