Abhijeet Ranjan

Integration of Third Party Scheduler (e.g. Control-M) with SAP PI/PO – Handling PI/PO intermediate message status in real time

In several productive scenarios, batch jobs are scheduled and tracked by third-party schedulers. A scheduler is responsible for seamless and optimized scheduling and processing of jobs. Control-M (BMC) is one such scheduler that exists in various landscapes to schedule jobs in PI/PO as well as ECC. The efficiency and throughput of Control-M (or any third-party scheduler software) depends heavily on the way it is integrated with components like SAP PI/PO, ECC etc. In high-volume scenarios, unforeseen events such as PI/PO messages getting stuck in the Adapter Engine (To Be Delivered – TBDL, Delivering – DLNG, etc.) do take place. Such situations adversely affect batch job execution (and completion). They might result in partial execution of PO/ECC jobs, after which a lot of reprocessing effort is required both in PI/PO and ECC. The motive behind writing this blog is to consider such undesirable scenarios during third-party scheduler (Control-M) integration with PI/PO and ECC.

Premises


1. Integration of Control-M with SAP PI/PO

2. Integration around file-based scenarios (File to any adapter)

Related Content on SCN

There is another valuable blog, written by Deepak Shah, which describes a file-generation-based approach to this integration.

Integrating Scheduler Software with SAP PI

           

Why this blog then?

  1. The approach used in this blog can open another avenue for people who are looking for a different integration pattern, especially in very high-volume scenarios. This solution uses real-time message status handling based on the PI/PO database.
  2. This solution is very helpful in File-to-IDoc scenarios where multiple IDocs are created from one file and the message volume is huge. By huge volumes, I mean the following:
      1. File count (sender side) can go up to 200-300 files of 5 MB each, making the total size as high as 1-1.5 GB
      2. IDoc count (receiver side) can go up to 5,000-50,000 IDocs per file, resulting in 500k IDocs in certain situations
  3. This solution provides a mechanism to accommodate sender-side actions once messages are found stuck in intermediate statuses like ‘To Be Delivered (TBDL)’, ‘Delivering (DLNG)’ and ‘Waiting (WAIT)’.


System Details


SAP PO 7.31 Single Stack

(It should work on dual stack and other PI/PO versions, provided the fields of table BC_MSG remain the same. Otherwise, minor adjustments might be required depending on the PI/PO version.)

Prerequisites


The SAP PO JDBC driver should be deployed to establish the JDBC connection.

(Please note: the JDBC connection to the PO database will only involve SELECT queries; no insert, update or delete operations will be performed on the DB in any case, whatsoever. Modifying SAP standard database tables directly is a strict no-go and is not advised under any circumstances.)

Business Scenario:


Let’s say a business requires automatic execution of jobs based on the interdependence and success of related jobs. Control-M will have to execute both PI/PO and ECC jobs to fulfil this requirement. By PI/PO jobs, I mean starting and stopping channels for various interfaces, which in turn pick up files from configured directories for processing. Subsequently, it has to start other ECC jobs based on the successful completion of predecessor/related jobs. ECC jobs can be batch jobs that are scheduled for batch processing.

Here, the dependency of jobs is crucial because certain jobs cannot be started unless predecessor jobs are successfully completed. The reason is that predecessor jobs might create the input for successor jobs. If a job fails, related jobs can be postponed based on business rules to make the end-to-end execution successful.

High-volume scenarios make it even more important to have perfect triggering of interdependent jobs in order to successfully complete end-to-end business processes.

Process Flow:

   a. Control-M job scheduling with fixed timing: A typical scenario (File to IDoc – multi-mapping) will include the following process chain:

  1. Control-M starts PO Job (ZPO_JOB1). This basically starts the sender channel.
  2. Files get picked up from source folders and processing gets completed in PI/PO. IDocs get delivered to ECC.
  3. Control-M starts the related ECC job (ZECC_JOB1) at a predefined fixed time and, meanwhile, stops the PO job (ZPO_JOB1), which basically stops the sender channel.

         

         Visible disadvantages in the above case:

  1. Fixed time scheduling will put technical constraints on the business processes
  2. If PO job fails, then two possibilities exist:
    1. If PI/PO Alerts are configured: related ECC job can be stopped
    2. If PI/PO Alerts are not configured: ECC jobs will be executed irrespective of PI/PO Job failure
  3. Another possibility in case of such high volume scenarios is partial completion of PI/PO Jobs.

By ‘partial completion’, I am referring to scenarios where messages get stuck in the Adapter Engine in To Be Delivered (TBDL), Delivering (DLNG) or Waiting (WAIT) status. These situations are encountered due to several reasons, e.g. resource failure, receiver bottleneck, system crashes etc.

These intermediate messages are not captured by alerts because of their non-final status.

So, if the PO job does not return any error back to Control-M, Control-M will trigger the ECC jobs, leading to partial execution in ECC because of partial delivery of messages (i.e. IDocs in File-to-IDoc scenarios) to ECC.

   b. Control-M job scheduling with another SOAP or HTTP_AAE call: (Solution Integrated)

       A typical scenario (File to IDoc – multi-mapping), along with the message-status-based solution, will typically follow the process chain below:

  1. Control-M starts PO Job (ZPO_JOB1). This basically starts the sender channel.
  2. Files get picked up from source folders and processing gets completed in PI/PO. IDocs get delivered to ECC.
  3. Control-M triggers a synchronous SOAP or HTTP call (whichever it is capable of) with a defined Interface ID and some other details, viz. Sender/Receiver Business Components, Service Interfaces (outbound and inbound), Interface Start Time, ICO Scenario ID (this is a unique identifier for an ICO and is fully capable of identifying an end-to-end interface execution; it can be integrated as a self-sufficient field, and all other details can be avoided) etc.

             Internal Processing:

      • Control-M triggers the SOAP/HTTP call.
      • The input data is read by PO and a synchronous JDBC call is triggered on the PO database table BC_MSG.
      • The conditional mapping maps the request data to a JDBC request structure to get the overall message count in TBDL/DLNG/WAIT/HOLD statuses (a hedged sketch of such a query follows this process chain).
      • It can even be tweaked to fire 5-6 JDBC calls (SELECT only, one per status), if required.
      • The JDBC response is appropriately mapped back to the SOAP/HTTP response.

  4. Control-M uses the SOAP/HTTP response to identify the extent of interface completion (if required, in terms of percentage), starts the related ECC job (ZECC_JOB1) accordingly, and stops the PO job (ZPO_JOB1), which basically stops the sender channel.
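
To give an idea of the kind of SELECT-only statement the internal JDBC call fires, here is a minimal, standalone Java sketch. It is a sketch only: the BC_MSG column names used (STATUS, SENT_RECV_TIME), the way the status codes are stored, and the connection details are assumptions and must be verified against your PO release. In the actual build the statement is generated by the request message mapping and executed by the JDBC receiver channel, not by custom code.

    // Minimal sketch only. Column names, status codes and the JDBC URL are assumptions;
    // verify them against your PO release before use.
    import java.sql.*;

    public class MessageStatusCount {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; in the real scenario these sit in the
            // JDBC receiver channel of the ICO, not in custom code.
            String url = "jdbc:<db-vendor>://<po-db-host>:<port>/<PO_DB>";
            String sql = "SELECT STATUS, COUNT(*) AS MSG_COUNT "
                       + "FROM BC_MSG "
                       + "WHERE STATUS IN ('TBDL','DLNG','WAIT','HOLD') "
                       + "AND SENT_RECV_TIME >= ? "        // interface start time from the request
                       + "GROUP BY STATUS";
            try (Connection con = DriverManager.getConnection(url, "<user>", "<password>");
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setTimestamp(1, Timestamp.valueOf(args[0]));   // e.g. "2016-05-01 06:00:00"
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("STATUS") + " = " + rs.getLong("MSG_COUNT"));
                    }
                }
            }
        }
    }

Additional WHERE conditions, based on the interface details passed in the request and subject to the columns actually available in BC_MSG, can narrow the count down to a single interface/ICO.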

Build


This solution can be realized using two integration patterns, depending on the capability of Control-M (or any other third-party scheduler) to fire the request:

  1. SOAP to JDBC – Synchronous (Preferred)
  2. HTTP_AAE to JDBC – Synchronous

A typical development procedure will include creation of the following:

  • ESR: Data Type (DT), Message Type (MT), Service Interface (SI), Message Mapping (MM), Operation Mapping (OM)
  • ID: Business Components, sender and receiver communication channels (SCC/RCC), Integrated Configuration (ICO), Configuration Scenario (CS)

(Details are not provided, as the solution can be customized according to the requirement. The approach is what is highlighted in this blog, while the regular object build is not explained owing to its custom nature and varying requirements. Help with the design can be provided on request.)

Testing


Once implemented, the solution can be tested with the Control-M (or third-party) integration. Alternatively, it can be tested from SoapUI to verify the results.

The image below captures the execution results in detail (status-wise). When fired, the call reflects the real-time status of messages falling under each respective status. The response statuses can be configured in the message mapping based on the requirement.
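
If SoapUI is not at hand, the same status-seeking call can also be fired with a plain HTTP POST. Below is a minimal Java sketch assuming the HTTP_AAE variant; the endpoint URL and the payload structure (MT_StatusRequest with IcoScenarioID and InterfaceStartTime) are hypothetical placeholders and have to be replaced with the sender channel URL and the message type actually defined in ESR.

    // Minimal test client sketch; endpoint and payload fields are hypothetical placeholders.
    import java.io.*;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class StatusRequestTest {
        public static void main(String[] args) throws Exception {
            String endpoint = "http://<po-host>:<port>/<http_aae-sender-channel-path>";
            String payload =
                  "<MT_StatusRequest>"
                + "<IcoScenarioID>ZPO_JOB1_FILE_TO_IDOC</IcoScenarioID>"
                + "<InterfaceStartTime>2016-05-01 06:00:00</InterfaceStartTime>"
                + "</MT_StatusRequest>";

            HttpURLConnection con = (HttpURLConnection) new URL(endpoint).openConnection();
            con.setRequestMethod("POST");
            con.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
            con.setDoOutput(true);
            try (OutputStream os = con.getOutputStream()) {
                os.write(payload.getBytes(StandardCharsets.UTF_8));
            }
            // The synchronous response carries the per-status counts mapped from the JDBC response.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(con.getInputStream(), StandardCharsets.UTF_8))) {
                in.lines().forEach(System.out::println);
            }
        }
    }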

Major advantages of the message status based approach

  1. It can save a lot of reprocessing (effort as well as time) and post-processing adjustments/reversals in case there is a failure/bottleneck in the PO system.
  2. It can improve the performance of Control-M in terms of job execution. This solution can be leveraged to execute jobs independently of hard-coded wait times. Once Control-M fires a status-seeking request and the response contains no errors (i.e. error count = 0), subsequent related jobs can be triggered immediately (see the decision-logic sketch after this list). This can dramatically decrease the fixed-time dependency as well as the execution time of batch job sets.
  3. It can provide a mechanism to quickly troubleshoot interfaces in production support and during maintenance activities.
  4. It can help in identifying PO system bottlenecks. If there are a lot of messages in TBDL/DLNG/WAIT/HOLD/FAIL, a quick system health check can be done and remedial steps can be taken. This subsequently prevents automatic PO system restarts and heap dumps due to out-of-memory (OOM) issues in very high-load scenarios.
  5. It can avoid message blacklisting, which happens in case of a system restart during message delivery.
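
On the scheduler side, interpreting the response boils down to a small check before ZECC_JOB1 is started. Here is a minimal sketch of that decision logic, assuming the response has already been parsed into per-status counts; the status keys mirror the statuses discussed above and the field names are assumptions.

    // Decision-logic sketch for the scheduler side (field/status names are assumptions).
    // Trigger the dependent ECC job only when no messages remain in intermediate
    // or error statuses; otherwise hold the job, alert and re-poll.
    import java.util.Map;

    public class JobTrigger {
        static boolean readyForEccJob(Map<String, Long> statusCounts) {
            long pending = statusCounts.getOrDefault("TBDL", 0L)
                         + statusCounts.getOrDefault("DLNG", 0L)
                         + statusCounts.getOrDefault("WAIT", 0L)
                         + statusCounts.getOrDefault("HOLD", 0L);
            long failed  = statusCounts.getOrDefault("FAIL", 0L);
            return pending == 0 && failed == 0;   // error count = 0 -> start ZECC_JOB1 immediately
        }

        public static void main(String[] args) {
            Map<String, Long> counts =
                    Map.of("TBDL", 0L, "DLNG", 0L, "WAIT", 0L, "HOLD", 0L, "FAIL", 0L);
            System.out.println(readyForEccJob(counts)
                    ? "Start ZECC_JOB1, stop ZPO_JOB1"
                    : "Hold ZECC_JOB1, raise alert and re-poll");
        }
    }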

Conclusion


This solution provides one way of effective PO/ECC job integration and execution. Reprocessing and post-processing adjustments/reversals are always tedious when a link in the batch job execution chain is found broken. From this point forward, there are numerous possibilities for the third-party scheduler (e.g. Control-M) to take action on such undesirable impediments. Based on the message status response, email alerts can be triggered and appropriate remedial steps can be taken.

Last but not least, this was one way of integrating Control-M with PO/ECC jobs. There can be more ways to achieve it as well. Readers are requested to share different approaches, if encountered, and their viewpoints on this one.

      12 Comments
      Former Member

      Hi Abhishek,

      Good workaround to tightly integrate the job scheduler with PI batch interfaces. :-)...

      Cheers,

      Rakesh

      Abhijeet Ranjan (Blog Post Author)

      Hi Rakesh,

      Thank you. Along similar lines of tight integration, this approach can be further extrapolated and used wherever the PI/PO message status is required, e.g. querying PI/PO from an ABAP program to trigger another dependent ABAP program.

      Regards,

      Abhishek

      Former Member

      Nice Blog Abhishek...

      Abhijeet Ranjan (Blog Post Author)

      Thank you Indrajit.

      Regards,

      Abhishek

      Eng Swee Yeoh

      Hi Abhishek

      Thanks for sharing such a detailed solution - nice one! 🙂

      Just wanted to share my two cents 😉

      By ‘Partial Completion’, I am referring to scenarios where messages get stuck in Adapter Engine in To Be Delivered (TBDL), Delivering (DLNG), Waiting (WAIT) statuses. These situations are encountered due to several reasons e.g. Resource Failure, Receiver Bottleneck, System Crashes etc.

      These intermediate messages are not captured by alerts because of their non-final status.

      While it is true that these intermediate statuses do not generate alerts, it is also possible to integrate this into CCMS. There is a PI-specific template that allows monitoring of qRFCs, AE backlogs, etc. This can be configured to trigger alerts to CCMS and subsequently to the relevant parties. More details in the link below:

      http://help.sap.com/saphelp_nwpi711/helpdata/en/90/4e313f8815d036e10000000a114084/frameset.htm

      Rgds

      Eng Swee

      Abhijeet Ranjan (Blog Post Author)

      Hi Eng,

      Feels good that you liked it.

      Besides, thanks a lot for providing your inputs on the CCMS route. I went through it and really appreciate it. To be more precise, this link - Current Number of Messages in Processing - Process Integration Monitoring - SAP Library - is more specific to the approach I arrived at.

      I do agree that AE backlogs can be captured there, but it appears to me that it will give a 10,000-feet view. Why I am saying this is that it will give you the AE backlog (based on components), but in scenarios where several interfaces are wrapped around a single component, it will be cumbersome to identify the active messages pertinent to any particular interface. If an interface-specific assessment cannot be done, the scheduler will be befuddled in triggering the related batch jobs. Hope you get my point.

      In case you've seen interface-specific status in CCMS, please let me know; I would like to explore more in that case.

      I've tried to keep the approach atomic to provide more flexibility to the scheduler.

      Regards,

      Abhishek

      Eng Swee Yeoh

      Hi Abhishek

      Yes, I agree with you. The CCMS approach is only at sender/receiver service level and does not provide the granularity up to interface level (at least not that I am aware of!)

      At the end of the day, I think it all depends on the requirements of the organization. If the service level is sufficient for the organization, the CCMS route provides a standard approach for enhancing the monitoring of the system health without additional investment/development. It provides a quick option for identifying the nasty TBDL/HOLD/WAIT backlogs that quite frequently occur in the system.

      Rgds

      Eng Swee

      Abhijeet Ranjan (Blog Post Author)

      Hi Eng,

      You're right. Eventually, it's the requirement that drives the course of the solution. 🙂

      Thanks for your insight.

      Regards,

      Abhishek

      Michael Johnson

      Hi Eng,

      Is it possible to send alerts for messages in Delivering / To Be Delivered status in a single-stack PO 7.4 system?

      thanks

      mike

      Former Member

      nice blog Abhishek!  Keep blogging

      Abhijeet Ranjan (Blog Post Author)

      Thank you Abhishek V.

      Regards,

      Abhishek

      vimal pillai

      Excellent blog Abhishek!!!