Technical Articles
stephen xue

SAP PO Async Message Processing Analysis

In a single-Java-stack SAP PO system, we have at least the following options controlling the number of message-processing threads:

  • Number of cluster nodes
  • messaging.connectionDefinition
  • messaging.system.queueParallelism.perInterface
  • ReceiverParallelism
  • MessagePrioritization

Let's walk through the following storyline to illustrate how each of these options works.

Company A has a single-Java-stack SAP PO system with the following scenario.

PO retrieves order messages from a JMS queue Q1 via a JMS sender channel and pushes them to system Target01's API via a REST receiver channel.

The PO server has one cluster node and all configurations are default.

It works fine.

1. Number of cluster nodes

One day a requirement arrives: a similar new scenario needs to be built, as below.

However, the upstream systems drop messages into the JMS queue Q2 rather quickly, and a problem is found:

Even though all 5 threads of the JMS send queue (JMS_) are in use, a backlog still builds up in Q2.

Given that the host SAP PO is installed on has enough CPUs and memory, the number of cluster nodes is increased from 1 to 2 to solve the problem (SAP KBA 1734360).

Since the maximum number of threads picking up messages from JMS queues has been doubled, there are now enough workers, and the system runs properly again.
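As a rough illustration (the function name and the linear-scaling assumption are mine, not SAP's), the total number of send-queue consumer threads grows linearly with the node count:

```python
# Hypothetical sketch: consumer threads for an adapter-specific send queue
# scale linearly with the number of cluster nodes.
def max_send_consumers(cluster_nodes: int, send_max_consumers: int = 5) -> int:
    """Total send-queue threads across the whole Adapter Engine."""
    return cluster_nodes * send_max_consumers

# With the default Send.maxConsumers = 5:
print(max_send_consumers(1))  # 5 threads with one node
print(max_send_consumers(2))  # 10 threads after adding a second node
```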

2. messaging.connectionDefinition

Company A's business keeps growing, and more and more order messages are dropped into Q1 and Q2. The time Target01 takes to return an acknowledgement (normally an HTTP status) is somewhat longer than the time the JMS sender channels need to retrieve messages from Q1 and Q2. Little by little this creates a large backlog in the dispatcher queue.

The symptom is that many messages sit in 'To Be Delivered' status in the message monitor, and it takes a long time to push a message through.

On the other side, Target01 reports that although a single message takes a long time to process, the system has plenty of capacity and can accept multiple incoming calls in parallel.

Therefore we can increase the number of consumer threads defined in messaging.connectionDefinition.


NWA–> Configuration–>Infrastructure–>Java Service Properties

Tab Services–>XPI Service: AF Core–>messaging.connectionDefinition

In this property we can configure the number of consumer threads for the adapter-specific queues. The default is 5 per queue per cluster node. Now we increase the Send consumers for JMS to 20: add the following string to the property and restart the J2EE service.

(name=JMS_, messageListener=localejbs/AFWListener, exceptionListener=localejbs/AFWListener, pollInterval=60000, pollAttempts=60, Send.maxConsumers=20, Recv.maxConsumers=5, Call.maxConsumers=5, Rqst.maxConsumers=5)

Since we now have two cluster nodes, there are at most 2 × 20 = 40 threads in total for all scenarios using JMS sender channels.

After the configuration change, more threads pick messages up from the dispatcher queue. Even though processing on the Target01 side is still a bit slow, the extra JMS send-queue threads clear the backlog in the dispatcher queue.
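The entry above can also be read programmatically; here is a small sketch (my own helper, not an SAP API) that parses such an entry and derives the 40-thread total:

```python
# Sketch: parse one connectionDefinition entry of the form
# (name=JMS_, messageListener=..., Send.maxConsumers=20, ...)
def parse_connection_definition(entry: str) -> dict:
    body = entry.strip().strip("()")
    pairs = (p.split("=", 1) for p in body.split(","))
    return {k.strip(): v.strip() for k, v in pairs}

entry = ("(name=JMS_, messageListener=localejbs/AFWListener, "
         "exceptionListener=localejbs/AFWListener, pollInterval=60000, "
         "pollAttempts=60, Send.maxConsumers=20, Recv.maxConsumers=5, "
         "Call.maxConsumers=5, Rqst.maxConsumers=5)")
cfg = parse_connection_definition(entry)

cluster_nodes = 2
total_send_threads = cluster_nodes * int(cfg["Send.maxConsumers"])
print(total_send_threads)  # 40
```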


3. messaging.system.queueParallelism


As Company A's business develops, a new scenario is added, as below.

A new queue is added to receive new master data from upstream systems, and SAP PO synchronises the data to Target02. Since a single master-data message is extremely large, one call takes very long to finish, say over a minute.

Even though there are only 3 scenarios in the system, all 40 threads are soon used up by scenario 3. For scenarios 1 and 2, Target01 is much faster than Target02, yet no vacant threads (queues) are left to process their messages. It still takes a long time for these two scenarios to finish one process, and most of their messages sit in status 'To Be Delivered' again.


NWA–>Configuration–>Infrastructure–>Java Service Properties

Tab Services–>XPI Service: Messaging System–>

  • system.queueParallelism.perInterface
  • system.queueParallelism.maxReceivers

To enable the mechanism, set the property perInterface to true (the default is false). Then set the property maxReceivers to 10. Once configured, the maximum number of threads (adapter-specific queue consumers) per interface per cluster node is 10.

Remember to restart the J2EE service.

After restarting the service, scenario 3 occupies at most 10 × 2 = 20 threads. Since the PO system still has another 20 threads for scenarios 1 and 2, these two scenarios no longer build up large backlogs.

With this configuration, the performance of a single scenario will not impact the whole system.
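A minimal sketch of the arithmetic (the function and its names are mine): with the per-interface cap in place, even a fully saturated slow interface leaves threads free for the others:

```python
# Illustrative only: with system.queueParallelism.perInterface = true and
# maxReceivers = N, one interface can occupy at most N threads per cluster node.
def threads_available_to_others(total_threads_per_node: int,
                                max_receivers: int,
                                cluster_nodes: int) -> int:
    slow_interface_cap = min(total_threads_per_node, max_receivers) * cluster_nodes
    total = total_threads_per_node * cluster_nodes
    return total - slow_interface_cap

# 20 Send threads per node (Send.maxConsumers = 20), 2 nodes, cap of 10:
print(threads_available_to_others(20, 10, 2))  # 20 threads remain for scenarios 1 and 2
```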


4. ReceiverParallelism

A new scenario has been built in Company A, as below.

Since Target03 is a server running on a PC, it cannot accept many connections in parallel. During the performance test, PO hits lots of HTTP 500 errors while pushing messages to Target03. Target03's technical staff advise that its maximum number of connections is 4.

With maxReceivers configured as 10, PO can open up to 20 connections to Target03, far more than its capacity, which causes the HTTP 500 connection errors.


Configuration and Monitor Home–>Configuration and Administration–>Adapter Engine–>PI Receiver Parallelism

Note: this option only works if the step in the previous section has been configured. When you open the view, a message tells you whether you can proceed: if configuration is allowed, you can go ahead; if not, you must first specify a system-wide maximum number of threads per interface (section 3), and only then can you restrict the number of connections for a specific interface here.

After clicking the Create button, enter the Party/Component/Interface/Namespace information into the table.

Remember, we have 2 cluster nodes, so the count for the interface should be 2 here, so that the maximum number of connections does not exceed 4.

The configuration takes effect right after clicking the Save button; there is no need to restart the J2EE service.
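The rule of thumb above can be expressed as a tiny helper (hypothetical, not an SAP function): divide the receiver's connection limit by the number of cluster nodes, since the configured value applies per node:

```python
# Sketch: the value entered in PI Receiver Parallelism applies per cluster
# node, so it must be derived from the target's total connection limit
# (rounding down to stay on the safe side, never below 1).
def per_node_receiver_parallelism(max_target_connections: int,
                                  cluster_nodes: int) -> int:
    return max(1, max_target_connections // cluster_nodes)

# Target03 tolerates at most 4 parallel connections; with 2 nodes:
print(per_node_receiver_parallelism(4, 2))  # enter 2 in the configuration
```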

5. MessagePrioritization

Company A has a new line of business. Messages are retrieved from a new JMS queue and should be pushed through as soon as possible for a better user experience. However, when scenarios 1/2/3 have large volumes of messages to send, say during a data-synchronization run, messages of this new scenario are delayed in the queue for a long time. To solve the problem, we need to let these messages jump the queue.


Configuration and Monitor Home–>Configuration and Administration–>Adapter Engine–>Message Prioritization

Configure a rule like the one below.

Without this configuration, a scenario's priority is Normal, which is assigned 20% of system resources by default.

With priority High, the system assigns 75% of resources to the scenario by default.

With priority Low, the system assigns 5% of resources to the scenario by default.

No J2EE restart is needed for this configuration.

Since more resources are assigned to the scenario, its messages in the dispatcher queue have a greater chance of being picked up by the adapter-specific queue, which is how they jump the queue.
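A toy simulation (my own sketch, not SAP's actual scheduler) of weighted picking with the default shares quoted above:

```python
import random

# Hypothetical model: a message is picked from the dispatcher queue with
# probability proportional to its priority weight (High 75%, Normal 20%, Low 5%).
WEIGHTS = {"High": 75, "Normal": 20, "Low": 5}

def pick_priority(rng: random.Random) -> str:
    return rng.choices(list(WEIGHTS), weights=list(WEIGHTS.values()))[0]

rng = random.Random(42)
picks = [pick_priority(rng) for _ in range(10_000)]
print({p: picks.count(p) for p in WEIGHTS})  # roughly 7500 / 2000 / 500
```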



Asynchronous Message Processing Sequence inside SAP PO

I cannot find any official document on how an asynchronous message is processed step by step inside SAP PO, so the sequence below is my own understanding. If anything is incorrect, you are very welcome to point it out. Thanks.

Step 1. When a message is pushed to an endpoint exposed by SAP PO, or polled by a channel such as FILE/JDBC/JMS, it first arrives in the adapter engine and is then put into the 'Adapter Specific Send Queue'. The queue name is <Adapter Type>_

For examples

  • if the message is picked up by a JMS channel, the queue name will be JMS_;
  • if the message is pushed to a RESTful endpoint of SAP PO, the queue name will be REST_.

Note: if the message comes from BPM, the queue is JPR_ (JPR stands for Java Proxy Runtime).

Step 2. A thread in the adapter-specific send queue puts the message into the dispatcher queue.

Step 3. A thread in the adapter-specific send queue picks the message up from the dispatcher queue, based on factors such as the number of free threads, messaging.connectionDefinition, messaging.system.queueParallelism.perInterface, prioritization, etc.

Step 4. The thread passes the message to the receiver determination defined in the ICO to which the message belongs. A receiver-component stamp is then attached to the message, and the message is placed back into the dispatcher queue. Once done, the thread is released and tries to pick up a new message from the dispatcher queue. The red rectangle shows the lifecycle of the thread.

Step 5. A thread in the adapter-specific send queue picks the message up from the dispatcher queue, based on factors such as the number of free threads, messaging.system.queueParallelism.perInterface, ReceiverParallelism, prioritization, etc.

Step 6. The thread pushes the message to the mapping program.

Step 7. The thread pushes the target message to the receiver channel. Once the receiver channel receives the HTTP status (200, 202, 503, etc.) back from the receiver system, the thread is released and tries to pick up a new message from the dispatcher queue. The green rectangle shows the lifecycle of the thread.
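The seven steps above can be sketched as a toy pipeline (a simplified model of my own, not SAP code); note how the dispatcher queue is visited twice, once before receiver determination and once before mapping and delivery:

```python
from queue import Queue

# Simplified single-message walk through the seven steps; the receiver name
# and HTTP status are illustrative placeholders.
def process(message: dict) -> dict:
    dispatcher = Queue()
    dispatcher.put(message)          # step 2: send-queue thread enqueues it
    msg = dispatcher.get()           # step 3: a thread picks it up
    msg["receiver"] = "Target01"     # step 4: receiver determination stamps it...
    dispatcher.put(msg)              # ...and re-queues it; the thread is released
    msg = dispatcher.get()           # step 5: another thread picks it up
    msg["mapped"] = True             # step 6: mapping program runs
    msg["http_status"] = 200         # step 7: receiver channel gets the HTTP status
    return msg

result = process({"queue": "JMS_", "payload": "<order/>"})
print(result["http_status"])  # 200
```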




Comments

Abhishek Roy:

Hi Stephen,

Awesome blog. Many doubts have been cleared.

Yunze Wang:

Hi Stephen,

Do the same solutions apply to sync scenarios too?

stephen xue (Blog Post Author):

Hi Yunze,

The queue names are different for sync communication; please refer to this one:

Yunze Wang:

Got it, thanks.