
Motivation:

Most PI/PO adapters run on Java. With PI 7.3, the IDoc and HTTP adapters will also be available on Java. Therefore, for most scenarios, Java-only processing of messages is possible using Integrated Configuration objects (ICOs).

For classical ABAP-based scenarios, parallelization was mainly controlled by adjusting the parallelization of the ABAP queues in the Integration Engine. Now, since all steps such as mappings and backend calls are executed in the same queue, tuning the Java-based Messaging System queues is essential. This blog summarizes the aspects that have to be considered when tuning the Messaging System queues.

The description is valid for all SAP PI releases. Newer versions have additional possibilities that will be explained where necessary.

Note: All this information will also be made available in the next version of the PI performance check via SAP Note 894509.

Prerequisite:

Wily Introscope is a prerequisite for the analysis discussed below. For more information on Wily please refer to SAP Note 797147 and http://service.sap.com/diagnostics.

Overview

Messaging System:
The task of the Messaging System is to persist messages processed on the Java stack and to assign resources to them.

Queues in the Messaging System
The queues in the Messaging System behave differently than the qRFC queues in ABAP. The ABAP queues are strictly First In First Out (FIFO) queues and are processed by only one dialog work process at a time. The Messaging System queues have a configurable number of so-called consumer threads that process messages from the queue. The default number of consumer threads is 5 per adapter (JDBC, File, JMS, …), per direction (inbound and outbound), and per Quality of Service (asynchronous (EO, EOIO) and synchronous). Hence, if one message takes very long, other messages that arrived later can finish processing earlier. Therefore Messaging System queues are not strictly FIFO.
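This non-FIFO behavior can be sketched with a small thread-pool example (plain Python for illustration, not SAP code; the message names and durations are invented):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# A pool of 5 workers stands in for the 5 default consumer threads
# of one adapter queue (e.g. Send.maxConsumers=5).
completed = []

def process(msg_id, duration):
    time.sleep(duration)        # simulate module processing / backend call
    completed.append(msg_id)

with ThreadPoolExecutor(max_workers=5) as pool:
    pool.submit(process, "msg1", 0.5)  # arrives first, but is slow
    pool.submit(process, "msg2", 0.1)  # arrives later, finishes earlier
    pool.submit(process, "msg3", 0.1)

# Unlike a strict FIFO qRFC queue, the later messages overtake msg1,
# so "msg1" ends up last in the completion list.
```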

There are four queues per adapter. All the queues are named <Adapter>_http://sap.com/xi/XI/System<Queue_Name>.
<Queue_Name> here stands for:

  • Send (Asynchronous outbound),
  • Rcv (Asynchronous Inbound),
  • Call (Synchronous outbound) or
  • Rqst (Synchronous inbound).

An example for the JMS asynchronous outbound queue is JMS_http://sap.com/xi/XI/SystemSend.

The Messaging System runs on every server node. Therefore the maximum number of parallel connections to a backend system can be calculated by multiplying the number of server nodes by the configured number of consumer threads. For example, if you have 6 server nodes with the default of 5 consumer threads for an adapter, you will have at most 30 parallel connections to the receiving backend.
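As a quick sanity check, the arithmetic from the paragraph above can be written down (illustrative Python, not an SAP formula):

```python
def max_parallel_connections(server_nodes: int, consumers_per_queue: int) -> int:
    """Upper bound on parallel backend connections for one adapter queue:
    every server node runs its own Messaging System instance with its
    own set of consumer threads."""
    return server_nodes * consumers_per_queue

# The example from the text: 6 server nodes x 5 default consumer threads
print(max_parallel_connections(6, 5))  # 30
```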

Difference between classical scenarios and Integrated Configuration (ICO)
As stated earlier, with 7.1 and higher more and more scenarios can be configured as an Integrated Configuration (ICO). This means message processing is done solely in Java, and a message is no longer passed to the ABAP stack.

In a classical ABAP-based scenario a message transferred, e.g., from the JDBC to the JMS adapter passes the Adapter Engine twice and is also processed on the ABAP Integration Engine (IE). The ABAP IE is responsible for routing and for calling the mapping runtime. This is shown below:
Classical Message Flow

In an Integrated Configuration all steps (routing, mapping, and modules) are processed in the Java stack only:
ICO message flow

Looking at the details of Messaging System processing, we can see that every message first passes through the dispatcher queue. The dispatcher queue is responsible for the prioritization of messages and passes a message on to the adapter-specific queue once consumer threads for that queue are available. Therefore a backlog on the dispatcher queue indicates a resource shortage on one of the adapter-specific queues. The message flow on the PI AFW outbound side for a classical (ABAP-based) scenario is shown below:
Details Classical Message Flow
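The dispatching idea can be sketched roughly as follows (hypothetical Python with invented names; the real dispatcher logic is internal to the Messaging System):

```python
from collections import deque

def dispatch(dispatcher_queue, adapter_queues, free_threads):
    """Move messages from the single dispatcher queue to their
    adapter-specific queue while that adapter still has free consumer
    threads; the rest stay behind in the dispatcher queue, which is
    where a backlog becomes visible."""
    backlog = deque()
    while dispatcher_queue:
        msg = dispatcher_queue.popleft()
        if free_threads.get(msg["adapter"], 0) > 0:
            adapter_queues[msg["adapter"]].append(msg)
            free_threads[msg["adapter"]] -= 1
        else:
            backlog.append(msg)   # resource shortage on that adapter queue
    dispatcher_queue.extend(backlog)

# Two JMS messages and one File message; JMS has one free thread, File none.
dq = deque([{"adapter": "JMS"}, {"adapter": "JMS"}, {"adapter": "File"}])
aq = {"JMS": [], "File": []}
free = {"JMS": 1, "File": 0}
dispatch(dq, aq, free)
print(len(aq["JMS"]), len(dq))  # 1 2 - two messages back up in the dispatcher
```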

In a classical scenario a backlog is most often seen on the receiver (inbound) side. The reason for this is the set of tasks executed in the Messaging System queue. The consumer threads on the sender (outbound) side only have to read the message from the database and forward it to the Integration Engine. Since these are purely PI-internal tasks, this is usually very fast. On the receiver (inbound) side, however, the consumer threads are also responsible for adapter module processing and the connection to the backend. This is more complex, and external factors like network bandwidth or the performance of the receiving system play a major role here.

In a Java-only (ICO) scenario all the processing steps in the Messaging System are executed in the sender queue. The receiver queues are not used at all. In addition to the steps mentioned above, the consumer threads are now also responsible for routing and mapping. All these steps are executed by the sender consumer threads; therefore more time is required per message and a backlog is more likely to occur.

Analyzing a Messaging System backlog:
In the Messaging System a backlog can be seen when many messages are in status “To be Delivered”. Messages in other statuses, such as “HOLD”, do not represent a backlog on queue level.

In general, backlogs in the Messaging System are caused in the following cases:

  1. Mass volume interfaces:
    An interface that is triggered in a batch can create many messages that queue within PI. For such interfaces a backlog in PI may even be necessary to protect the receiving backend from overload. In such a case the purpose of tuning PI is therefore not to prevent the backlog but to ensure that other runtime-critical interfaces are not blocked by it.
  2. Slow performance of interface:
    A long processing time for a single message can be another reason for a backlog. In the classical ABAP-based scenario this usually happens only on the receiver adapter. The reason can be a long runtime in an adapter module or in the receiving adapter itself. This can happen for all adapter types but is especially critical for JDBC/FTP and EDI adapters connecting via slow WAN connections (like OFTP via ISDN).
    In the Java-only scenario (ICO) the mapping execution is, in addition to the above, also triggered by the Messaging System. A long-running mapping (e.g. due to large messages or complex mapping logic) can therefore also cause a backlog for such interfaces. In such a case the poor performance of, e.g., the INSERT statements to the remote DB has to be analyzed and improved. This is not discussed in more detail here; please refer to the PI Performance Check for more information.

If further tuning is not possible, the task here too is to avoid blocking situations with other interfaces and to avoid overloading the system.

Based on the above, the aim of tuning the Messaging System queues is to reduce the PI-internal backlog while also ensuring that connected backend systems can handle the messages delivered by PI.

In general backlogs on queues can be recognized using the Engine Status (RWB -> Component Monitoring -> Adapter Engine -> Engine Status) information as shown below.
RWB Queue Overview

You can see here the number of messages in the queue and also the threads available and currently in use. This page only represents a snapshot for one server node. It is not possible to look at historical data or verify the thread usage on several server nodes simultaneously.

Therefore Wily Introscope is highly recommended. The PI Triage dashboard shows all this information on one screen and also allows analysis of historical data (by default 30 days). For efficient tuning of your system, Wily is effectively mandatory.

Wily PI Triage

Parameters relevant for MS Queue tuning

Only a couple of parameters are available to tune the queue processing in the Messaging System:

1. Configure consumer threads per adapter:
As explained above, all messages are processed by consumer threads that work on adapter-specific Messaging System queues. The number of consumer threads can be configured to increase the throughput per adapter. The adapter-specific queues in the Messaging System are configured in the NWA using service “XPI Service: AF Core” and property “messaging.connectionDefinition“. The default values for the sending and receiving consumer threads are as follows:
(name=global,messageListener=localejbs/AFWListener,exceptionListener=localejbs/AFWListener,pollInterval=60000,pollAttempts=60,Send.maxConsumers=5, Recv.maxConsumers=5,Call.maxConsumers=5,Rqst.maxConsumers=5).
To set individual values for a specific adapter type, you have to add a new property set to the default set with the name of the respective adapter type, for example:
(name=JMS_http://sap.com/xi/XI/System, messageListener=localejbs/AFWListener, exceptionListener=localejbs/AFWListener, pollInterval=60000, pollAttempts=60, Send.maxConsumers=7, Recv.maxConsumers=7, Call.maxConsumers=7, Rqst.maxConsumers=7).
Note that you must not change parameters such as pollInterval and pollAttempts. For more details, see SAP Note 791655 – Documentation of the XI Messaging System Service Properties. Not all adapters use the above parameter. Some special adapters like CIDX, RNIF, or Java Proxy can be changed by using the service “XPI Service: Messaging System” and property messaging.connectionParams, adding the relevant lines for the adapter in question as described above.
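For reference, the structure of such a property set can be composed programmatically. The following Python helper is purely illustrative (the value is still maintained by hand in NWA); pollInterval and pollAttempts are fixed at their defaults, since SAP states they must not be changed:

```python
def connection_definition(adapter=None, send=5, recv=5, call=5, rqst=5):
    """Compose one property set for messaging.connectionDefinition.
    adapter=None yields the 'global' set; otherwise the adapter type
    prefix (e.g. 'JMS') is expanded to <Adapter>_http://sap.com/xi/XI/System."""
    name = "global" if adapter is None else f"{adapter}_http://sap.com/xi/XI/System"
    return (f"(name={name},messageListener=localejbs/AFWListener,"
            f"exceptionListener=localejbs/AFWListener,pollInterval=60000,"
            f"pollAttempts=60,Send.maxConsumers={send},Recv.maxConsumers={recv},"
            f"Call.maxConsumers={call},Rqst.maxConsumers={rqst})")

# The JMS example from the text with 7 consumer threads per queue:
print(connection_definition("JMS", send=7, recv=7, call=7, rqst=7))
```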


2. Restrict the number of threads per interface:

  • Restrict the number of threads for classical (dual-stack) interfaces
    As stated earlier, in such a situation by default all worker threads could be occupied by messages waiting for the backend system to finish processing. Thus, this interface can block all other interfaces of the same adapter type, which rely on the same group of worker threads.
    To overcome this, the parameter queueParallelism.maxReceivers can be used as described in SAP Note 1136790. For classical (dual-stack) scenarios it restricts the number of worker threads per interface (based on the receiver Party/Service and Interface/Namespace information). By using this parameter you can ensure that even during backlog situations resources are kept free for other interfaces. It is a global parameter, meaning it applies to the Receive queues of all adapters. SAP highly recommends setting this parameter for all customers running high-volume and business-critical interfaces in the same PI system.
  • Restrict consumer threads for ICO interfaces:
    Java only (ICO) scenarios do not use the Messaging System Receive queues. All steps are performed in the Send queues. Thus, the maxReceivers parameter is not applicable.
    But a separation is also necessary in ICO scenarios, since the same problems (a long-running mapping or a slow backend) can occur. For this, SAP Note 1493502 introduces the property “messaging.system.queueParallelism.queueTypes”. By setting the value “Recv, IcoAsync” you ensure that maxReceivers is applied to both ABAP-based and Java-only scenarios.
    For finding the right value the same rules as above apply.
  • Enhancement to allow configuration of parallel threads per interface:
    With 7.31 SPS11 and 7.4 SPS6 (Note 1916598 – *NF* Receiver Parallelism per Interface) an important enhancement was introduced that allows the specification of the maximum parallelization not just globally but on a more granular level. This new feature has to be activated by setting the parameter messaging.system.queueParallelism.perInterface in service MESSAGING to true.
    Using a configuration UI you can specify rules that determine the parallelization for one or all interfaces of a given receiver service. If no rule is specified for a given interface, the global maxReceivers value applies. A potential use case is to restrict the parallel calls to a receiver system in order to avoid overloading it. If the receiver system corresponds to a technical business system, only the receiver service would be entered, and the interface and namespace would be “*”. This means that across protocols (e.g. IDoc_AAE, Proxy, and RFC) the parallelization would be limited to the value specified in this rule. Below you can find a screenshot of the configuration UI in NWA -> SOA -> Monitoring.
    Receiver Parallelism Configuration UI
    With the improvement mentioned above, the dispatching mechanism in the dispatcher queue was also changed so that it is aware of the maxReceivers settings. This means that the backlog is now again placed in the dispatcher queue and the prioritization works properly.
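The effect of maxReceivers described above can be sketched as a per-interface cap on top of a shared thread pool (hypothetical Python; the names are invented and this is not how the Messaging System is implemented internally):

```python
import threading
from collections import defaultdict

MAX_RECEIVERS = 5  # plays the role of queueParallelism.maxReceivers

# one counting semaphore per receiver interface key
_interface_slots = defaultdict(lambda: threading.BoundedSemaphore(MAX_RECEIVERS))

def deliver(receiver_key, send_to_backend):
    """Executed by a consumer thread: if this interface already occupies
    MAX_RECEIVERS threads, the thread blocks here, but the remaining
    threads of the pool stay free for other interfaces."""
    with _interface_slots[receiver_key]:
        send_to_backend()

# A backlogged interface can never occupy more than MAX_RECEIVERS of the
# pool's threads, so other interfaces keep getting resources.
results = []
deliver(("PartyA|ServiceA", "IF_Orders|urn:demo"), lambda: results.append("sent"))
```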


3. Adapt Concurrency of Communication Channel:
Many receiver adapters like File, JDBC, or Mail work sequentially per server node by default. They use a synchronization barrier, as shown below, to avoid overloading a backend system:
Sync Barrier for MaxConcurrency

Since these adapters only process one message per server node at any given point in time, you should not dedicate too many consumer threads to them. For example, it makes no sense to simply configure 20 worker threads without setting maxReceivers.

To increase the throughput in such a case, you also have to adjust the parallelism of the communication channel. For the File and JDBC adapters this can be done via the “Maximum Concurrency” value shown in the screenshot below. This ensures that all worker threads allocated per interface are really processing messages. Of course, the achievable degree of parallelism depends heavily on the resources available at the backend system. Setting “Maximum Concurrency” higher than maxReceivers makes no sense.
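Putting these limits together, a rough upper bound for one such channel can be estimated as follows (illustrative Python; the exact interplay depends on the adapter):

```python
def effective_channel_parallelism(server_nodes, consumer_threads,
                                  max_receivers, max_concurrency):
    """Rough upper bound on parallel backend calls for one channel of a
    sequential-by-default adapter (File/JDBC/Mail): per server node the
    channel handles at most max_concurrency messages, each message needs
    a consumer thread, and maxReceivers caps the threads one interface
    may use."""
    per_node = min(consumer_threads, max_receivers, max_concurrency)
    return server_nodes * per_node

# 2 nodes, 20 consumer threads, maxReceivers=5, Maximum Concurrency=3
print(effective_channel_parallelism(2, 20, 5, 3))   # 6

# Raising Maximum Concurrency above maxReceivers gains nothing:
print(effective_channel_parallelism(2, 20, 5, 10))  # 10, capped by maxReceivers
```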

4. Tuning the MS step by step
With Wily it is pretty easy to recognize a backlog. The screenshot below shows a situation where, on both available server nodes, most of the available worker threads are in use for around 15 minutes. If another runtime-critical interface sent messages at the same time, they would be blocked.
Wily Worker Thread Shortage

The screenshot below shows a backlog in the Dispatcher Queue. As discussed above, messages are only moved to the adapter-specific queues when free resources are available. Thus, a backlog in the Dispatcher Queue points to a resource bottleneck for one specific adapter. This can easily be seen in Wily.
Wily Dispatcher Queue

In case you notice such a backlog you should increase the number of consumer threads for the identified queue as described above.


5. Tuning the number of threads used per interface (classical ABAP based scenario):
Tuning of the Messaging System has to be done carefully since it has a direct impact on the resources required in PI and the connected backend. Often PI is a very powerful system and can overload connected backend systems easily.

The Wily screenshot below shows such an example: a batch-triggered interface sends a large number of messages within a very short timeframe; these are balanced across all server nodes and then processed over a period of more than 20 minutes.
Wily Inbound Queue backlog

As stated earlier, in such a situation by default all worker threads could be occupied by messages waiting for the backend system to finish processing. Thus, this interface can block all other interfaces of the same adapter type, which rely on the same group of worker threads.

To overcome this, restrict the number of consumer threads per interface using the parameter queueParallelism.maxReceivers. In the example below, the system has 40 worker threads configured (right graphic), but only 5 of them are used for processing the backlog, because queueParallelism.maxReceivers is set to 5. Thus, the remaining 35 consumer threads remain available for other interfaces and no blocking situation occurs.

Wily backlog & consumer threads

There is a specific Wily dashboard showing the current usage of maxReceivers per interface:
Wily MaxReceivers Dashboard


57 Comments


  1. Former Member
    Hi,

    The blog is awesome. especially it clears the confusion in setting maxReceivers when we have high volume Sequential and Parallel adapters.

    Can you kindly explain in detail how max concurrency helps. I understand that the JDBC channel can acquire that many parallel DB connections, still the JDBC Adapter is going to process only one message at a time on a particular Server node.

    Many thanks.
    Sudharshan N A

    1. Former Member
      Sorry. Read the blog once again.

      Does it mean that if we set the max concurrency say to 3 for a particular channel then 3 worker threads will be allowed to process the actual message (3 messages on a server node) and connect to remote db parallely.

      Thanks.

      1. Mike Sibler
        Post author
        Hi Sudharshan,

        setting the max. concurrency to 3 for a receiver Communication Channel will allow the parallel processing of three messages per server node.

        In case you have 4 server nodes on your system you will therefore have 12 connections to the remote DB for this channel only. Therefore you have to ensure that by increasing the Max. Concurrency the remote DB will not be overloaded. 

        Hope this explains your question.

        Best regards,
        Mike

        1. Former Member
          Hi Mike,

          Thanks for the clarification.

          If we set the max concurrency in a channel greater than max Receivers I hope that the parallelism is limited by max Receivers. Is my understanding right.

          Best Regards,
          Sudharshan N A

          1. Mike Sibler
            Post author
            Hi Sudharshan,

            this is right. MaxReceives is the more restrictive setting here. Thus it makes no sense to have max Concurrency > max Receivers and I would not configure it this way.

            Regards,
            Mike

  2. Sai Ganesh
    Hello Mike,

    The explanation is simply superb. It gave a clear picture of what happens in the Messaging system and how to fine tune it. Looking ahead for some more fine tuning & trouble shooting blogs 4m u 🙂

    Br,
    SaiGanesh

  3. Former Member
    Hello Mike,

    Good work. I've had to adjust these settings quite a few times, and it was long trial-and-error work plus back-and-forth with SAP to figure it out and make it work efficiently.

    For some adapters, like synchronous SOAP senders, other factors come into play as well, such as available FCA threads and, astonishingly, HTTP session counts/lifetimes.

    It's good to see more official info coming out.

    With best regards
               Sebastian

  4. Mariah Huang
    Nice to have one comprehensive article to describe the PI messaging mechanism as well as present vivid usage scenarios. Keep up the good work and look forward to more terrific works from you 🙂

    Mariah

  5. Rahul Thunoli
    Hi ,

    In one of the paragraphs there is a sentence  –

    ” However on the outbound side the consumer threads will also be responsible for the Adapter module processing and the connection to the backend. “

    Didn’t you mean the inbound side ?

  6. Former Member
    Hi,

    Is there a maximum number of threads that can be used in total? We have set the global parameter to 15 but we are curious if there is a maximum number of threads one server node can handle?

    Like to hear from you.

    grtz Fons

    1. Mike Sibler
      Post author
      Hi Fons,

      actually there is no theoretical limit. But of course there are limiting factors on the J2EE side. E.g., by default a server node can have at most 100 DB connections. Since each consumer thread consumes a DB connection, that is one limit.

      As usual, such a configuration heavily depends on your scenario and the adapters being used. Not all adapters can use all the configured threads anyway, as discussed in my blog. For parallel adapters like SOAP or JMS we have configured up to 40 threads for dedicated adapters (not globally) at several bigger installations. But if they are all used up, it might be a better idea to scale via additional server nodes instead of increasing the number of consumer threads further.

      Regards,
      Mike

  7. Mike Sibler
    Post author
    Dear Raja,

    with the information above I cannot really judge what is going on. You should verify the thread usage using Wily Introscope. With JDBC the problem is often not caused by PI but by the connection to the backend (e.g. due to serialization or bad performance).

    I would recommend to take thread dumps when the problem happens again, provide access to Wily and open a customer message at component SV-BO-XI.

    Regards,
    Mike

  8. Raja Sekhar Reddy

    I have added the below custom property for the JDBC adapter in NWA. After the parameter change the XPI Service: AF Core / XPI adapter is not running.

    (name=JDBC_http://sap.com/xi/XI/System, messageListener=localejbs/AFWListener, exceptionListener=localejbs/AFWListener, pollInterval=60000, pollAttempts=60, Send.maxConsumers=5, Recv.maxConsumers=10, Call.maxConsumers=5, Rqst.maxConsumers=5) .

    It's really strange; is anything wrong in the property value?

      1. Raja Sekhar Reddy
        Hi Mike,
        as per document
        Configured 20 worker threads for JDBC, set maxReceivers=3, and set the “Maximum Concurrency” value to 3 in the JDBC channel, but in my case performance has not improved.

        i am getting below error

        Message processing failed. Cause:
        Channel has reached maximum concurrency (5,000 concurrent messages) and no free resource found within 5,000 milliseconds; increase the maximum concurrency level.

        any pointers?
        Regards,
        Raj

        1. Mike Sibler
          Post author
          Hi Raj,

          this is described in Note 1136474. In your case I think the channel is shared between different interfaces (otherwise you could not have 5 parallel messages with maxReceivers set to 3).

          But usually this indicates that you have a very slow backend because otherwise the timeout should not occur.

          As a workaround you can increase the parameter poolWaitingTime to avoid this error (but usually it is better to look into the backend performance first). If the channel is shared between interfaces you could also think about a separation to avoid impact between interfaces.

          Regards,
          Mike

  9. Dominique Remy

    Hello,

    Could somebody tell me if what I read from this nice blog is correct?

    I read: “Thus it makes no sense to have max Concurrency > max Receivers.”

    For my understanding, “Max Concurrency” is set at the channel level, but where is “Max Receivers” set?

    Is it the parameter “messaging.system.queueParallelism.maxReceivers”?

    If yes, and we have set “Max Concurrency” for the JDBC adapter to 20, should the parameter “messaging.system.queueParallelism.maxReceivers” be equal to or greater than the “Max Concurrency” parameter?

    So in our case we should set the parameter “messaging.system.queueParallelism.maxReceivers” to 20 or higher ?

    Is that correct ?

    Thanks in advance for this clarification

    1. Former Member

      Hello Dominique!

      The queueParallelism.maxReceivers parameter will limit the number of threads allocated for a single channel on the adapter engine. For example, you have configured 20 threads on the JDBC_ template for messaging.connectionDefinition parameter, but you don’t want a single channel (consider you have more than one receiver JDBC channel) to use all the resources alone, so you use the maxReceivers parameter.

      So: messaging.connectionDefinition : Total number of threads available for an adapter type/direction; 

      Max Concurrency (on the channel) : Number of database connections allowed by this communication channel to the database. This shouldn’t be lower than the allowed number of threads to handle this channel in parallel;

      queueParallelism.maxReceivers : up to how many threads available for the adapter type/direction a single communication channel can use.

      Hope it helps!

      BR,

      Lucas Santos

      1. Dominique Remy

        Hello Lucas,

        First of all thanks for your answer 🙂

        So to be sure :

        1. Max Concurrency (on the channel) = 20

        2. messaging.connectionDefinition =

        (name=global, messageListener=localejbs/AFWListener, exceptionListener=localejbs/AFWListener, pollInterval=60000, pollAttempts=60, Send.maxConsumers=5, Recv.maxConsumers=5, Call.maxConsumers=5, Rqst.maxConsumers=5)(name=JDBC_http://sap.com/xi/XI/System, messageListener=localejbs/AFWListener, exceptionListener=localejbs/AFWListener, pollInterval=60000, pollAttempts=60, Send.maxConsumers=20, Recv.maxConsumers=20, Call.maxConsumers=5, Rqst.maxConsumers=5)

        3. messaging.system.queueParallelism.maxReceivers = 10

        Is that correct ?

        or should I set the “Max Concurrency” parameter to be lower than the “messaging.system.queueParallelism.maxReceivers” parameter ?

        Thanks in advance for this clarification

        1. Former Member

          Hello Dominique!

          Considering a single-server-node architecture, the Max Concurrency won't be reached, because you limit the number of parallel processings for that interface (channel) to 10. So you can lower it to 10 as well, or increase the maxReceivers (not recommended if you have more than one receiver JDBC channel).

          BR,

          Lucas Santos

            1. Former Member

              Hello!

              Not much, but then you’d have up to 20 parallel connections to the database instead of 10 (Max Concurrency) to consider in the maths.

              Max. Concurrency will determine the amount of database connections per server node. To check if the resources are correct, an easy way is to check the Engine Status (RWB -> Component monitoring -> Adapter Engine -> Engine Status). This should give a clearer overview of the threads usage, to help setting the Max. Concurrency parameter.

              BR,

              Lucas Santos

  10. Eng Swee Yeoh

    Hi Mike

    Thanks for such a detailed blog. It is very useful towards understanding the performance aspect of PI. I made a reference to your blog in one of my post, hope you don’t mind 🙂

    Thanks

    Eng Swee

  11. Chris Mills

    Thanks for the detailed article Mike, really good to have the queue comparison between dual and single stack; it now makes more sense why we have seen slow mappings causing an entire system backlog on a single-stack system.

    Cheers

    Chris

  12. Former Member

    Hi,

    Amazing blog. Congratulations! I have a doubt and it would be nice if I could have some light.

    I had assumed that in AEX installation all the messaging system pipeline steps (XML Inbound validation, Receiver Determination, Interface Determination, Mapping, XML Outbound validation) were carried out by the sender adapter specific queues threads but doing some tests I’ve detected the following.


    In order to understand correctly the scenario I’ve created a SOAP to FILE scenario where the Message Mapping takes 5  minutes (wait in a UDF) and I’ve created an enhanced message mapping to decide the receiver so I can add to the log the moment where the Receiver Determination step is carried out.


    I’ve used a java only installation (AEX), with just one java node, where the message is persisted after Receiver Determination and after Message Mapping.

    /wp-content/uploads/2015/10/1_809012.png


    Let’s look at the scenario at runtime:


    5 messages are sent via a SOAP UI client. Given that just one node is available and the number of threads for the SOAP Send Queue is set to 5 the 5 messages remain in delivering doing the mapping.


    /wp-content/uploads/2015/10/2_809013.png

    Dispatcher queue is empty and SOAP Send queue is using all the available threads so no further messages are able to be processed at this time.


    /wp-content/uploads/2015/10/3_809015.png


    Two additional messages are sent and given there are no available threads the messages are saved in status to be delivered.


    /wp-content/uploads/2015/10/4_809017.png


    The 2 incoming messages remain in the dispatcher queue and the SOAP send queue is still processing the first 5 messages.


    /wp-content/uploads/2015/10/5_809019.png

    Until here nothing to highlight given it is the expected behaviour.


    But looking at the log of the last two messages that are still in the dispatcher queue, we can see that the first two steps of the pipeline processing have been carried out (Inbound XML Validation, Receiver Determination).


    /wp-content/uploads/2015/10/6_809021.png


    How is that possible if there were no threads available and the messages are still in the dispatcher queue? Does that mean that some of the steps of the pipeline processing are carried out by the dispatcher queue?


    Many thanks for your feedback!

    1. Mike Sibler
      Post author

      Hi Phileas,

      your observations are fully correct and the backlog in the SOAP and dispatcher queue can be explained with the explanation provided in my initial blog. It shows nicely how easy it is to block all threads with a single interface. That is exactly the reason why we introduced the maxReceiver parameter!

      You are also right that with the default staging configurations a couple of steps – namely the XML validation and the receiver determination – are done before the message is persisted in the Messaging System queues. It is not the dispatcher thread which executes the actions but the adapter thread itself. So in your case the SOAP sender thread also carries out the XML validation (if configured) and the receiver determination.

      This behavior can be configured with the staging configuration. I would recommend you read my SAP Note 1760915 – FAQ: Staging and Logging in PI 7.3 and higher. In your example above I would also not recommend to set staging after mapping as you have done it. This will mean that each message has to pass twice through the dispatcher and the SOAP queue. Custom staging settings should only be used carefully as described in the Note.

      Regards,

      Mike

      1. Former Member

        Hi Mike,

        Many thanks for your quick response. Again, awesome explanation.

        Could we then state that:

        Pipeline Processing

        1 to 1 scenarios: The adapter thread will carry out the pipeline steps until the message is persisted (based on staging configuration/logging configuration).

        1 to N scenarios: The adapter thread will carry out the pipeline steps until the message is persisted (based on staging/logging configuration) or until the receiver determination detects more than one receiver, at which point N messages will be put into the dispatcher queue (one for each receiver).

        Dispatcher Queue

        For each staging step a context switch in the processing takes place. This means a new message version is persisted, the message is put into the Dispatcher Queue again and will be processed by another Messaging System consumer thread once a free thread is available.

        If logging is configured no such context switch will occur. The thread will persist the message and continue processing.
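        As a rough illustration of that distinction (plain Python as a toy model; the function and step names are made up, this is not SAP code):

```python
def process_message(staging_steps, logging_steps):
    """Toy model of the behavior summarized above: each configured
    staging step persists a new message version AND re-enqueues the
    message into the dispatcher queue (a context switch), while a
    logging step persists a version but lets the same consumer thread
    continue processing."""
    dispatcher_passes = 1          # initial entry into the dispatcher queue
    persisted_versions = 0
    for _ in staging_steps:
        persisted_versions += 1    # new message version stored
        dispatcher_passes += 1     # re-enqueued, picked up by a free consumer thread
    for _ in logging_steps:
        persisted_versions += 1    # version stored, same thread keeps going
    return dispatcher_passes, persisted_versions

# Staging after mapping (as in the example discussed above) means one
# extra pass through the dispatcher and adapter queue:
print(process_message(["after_mapping"], []))   # (2, 1)
# The same step configured as logging causes no extra pass:
print(process_message([], ["after_mapping"]))   # (1, 1)
```

        The model only counts passes and persisted versions, but it makes visible why staging steps cost more than logging steps under load.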

        Many thanks for your input.

        Your explanations in this blog are essential to correctly understand and set up message processing in Java scenarios.

        Kind regards.

        1. Former Member

          Hello Phileas, Mike

          Looking at the last two comments – they give exact info on what is going on behind the scenes.

          Appreciate your effort, as it really gives a clear picture and is aligned with Mark 🙂.

          Thank you both for sharing this.

          –Divyesh

  13. Former Member

    Hi all,

    Just another question, this time regarding how to analyse the data in Wily Introscope.

    We have a landscape (AEX) with thousands of messages running every day, but the Asynchronous Inbound Queue Sizes and Dispatcher Queue Sizes in Wily Introscope are always 0.

    (screenshot: Introscope queues.png)

    Does that mean that Wily only shows messages in the queues if the max number of threads is reached and the messages need to wait to be assigned to the next available thread? Otherwise I assume that at least one message should be displayed in the Wily queue sizes.

    Many thanks!

    1. Mike Sibler
      Post author

      Hi Phileas,

      sorry for my late reply. The dashboard above shows the “Queue Size”. These are all messages in backlog – meaning in TBDL status. Since you do not have any entries in there, I assume there was no backlog.

      What I cannot see in your screenshot is the resolution (time window). I always use the Minimum and Maximum functions to see details here. Just right-click on the dashboard and choose “Show Minimum and Maximum”. This way you will also see peak situations.

      If you want to see details about processed messages you have to navigate to the detailed screen (by double-clicking on the arrow on the right side of the dashboard).

      Regards,
      Mike

  14. Former Member

    Hello Mike,

    Thanks for the informative blog. We are planning to set the Max Receiver parameter for ICOs, “queueParallelism.maxReceivers”, as per Note 1493502 in our landscape, as we are facing some issues. We have around 6 server nodes for production in our landscape. I have some queries, if you have any ideas:

    1. The current setting is the default of 0. What value should we set to avoid hanging queues? Any suggested value?

    2. Will the value we set apply per server node or to the entire system? For example, if we have a total of 30 worker threads (5 per node * 6 nodes) and we set the parameter value to e.g. 5, then at runtime will 5 threads be occupied per interface and 25 be free for others, or will (5-5) zero threads be free for others?

    3. I want to set the parameter messaging.system.queueParallelism.queueTypes = Recv, IcoAsync. The current value is the default ” “. Will the default behave as Recv, IcoAsync, or do I need to add Recv, IcoAsync manually in the property?

    Thanks.

    1. Mike Sibler
      Post author

      Hi Ranjan,

      let me try to answer your questions.

      1) Right value for maxReceivers:

      As explained above this is not always easy to determine. It mainly depends on the volume and the adapters you are using. Until recently this was a global setting affecting all receiver interfaces. But as outlined in the updated section 3 above there is an enhancement with Note 1916598 so that you can configure it on a more granular level.

      In general you always have to think about the maximum parallelization you want to have on the backend (e.g. due to limited Dialog work processes on ERP) and then determine the right parameters based on the number of server nodes you have.

      2) Value valid per server node or per system:
      All settings here are per server node. So in your case with a setting of 5 threads and 6 server nodes you will have 30 parallel threads for that interface at most.
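      To make the arithmetic explicit (an illustrative Python sketch; the function names are invented, only the per-node rule comes from the answer above):

```python
def interface_ceiling(max_receivers_per_node, server_nodes):
    # maxReceivers applies per server node, so the system-wide ceiling
    # for one receiver interface is the product of the two values.
    return max_receivers_per_node * server_nodes

def free_threads_per_node(consumer_threads_per_node, max_receivers_per_node):
    # Threads on one node that remain available for other interfaces
    # while a single interface uses its full maxReceivers allowance.
    return max(consumer_threads_per_node - max_receivers_per_node, 0)

print(interface_ceiling(5, 6))        # 30 parallel threads system-wide
print(free_threads_per_node(5, 5))    # 0 -> one interface can still occupy a whole node
```

      The second function mirrors the (5-5) concern from the question: with the default 5 consumer threads per node, maxReceivers=5 leaves no free threads on that node unless the consumer threads are increased as well.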

      3) Default value for queueTypes:

      If you want to restrict asynchronous processing only, there is no need to change the settings of the queueTypes parameter.

      Regards,
      Mike

      1. Former Member

        Thanks for the reply, Mike. I would appreciate it if you could throw some light on the related queries below:

        1. Is the parameter applicable to IDOC_AAE senders as well, i.e. if my scenario is from SAP to a third party (IDoc_AAE to third party), will changing the value from 0 to any other value invoke parallelization in interfaces with IDoc sender channels too? We are on PI 7.31 SP06, single stack, ICO.

        2. Is the number of worker threads for Idoc_send defined in the XPI Service: AF Core service or in the Inbound_RA?

        3. Does having only one IDoc sender channel for multiple interfaces impact performance? Is it a good idea to create a different communication channel for every interface with an IDoc sender adapter?


        I am having these queries as we recently faced performance issue in prod. The detailed description of which is :


        Performance Issue – Idoc_Sender messages in scheduled status


        Thanks.

        1. Mike Sibler
          Post author

          Hi Gaurav,

          you have to differentiate between the sender IDoc adapter processing and the IDOC_AAE queue processing here. Reading the description of your performance problem, it is not caused by the sender IDoc adapter but by slow message processing in either the mapping or (more likely) the receiver system. The backlog of course happens on the IDoc_AAE queues, and all the tunings mentioned above (increasing consumer threads, setting maxReceivers to avoid blocking of other interfaces, maxConcurrency) are therefore valid. But if the backlog is for one critical interface you have to check the reason for the slow processing.

          IDoc_AAE tuning is only needed if you have a problem that IDocs are not received quickly enough by PI and you see a high backlog in SM58 (SMQ2 for serialized IDocs) on the sender side. But typically this is not an issue. IDoc_AAE sender tuning mainly happens on the inbound resource adapter (RA). There you can increase the number of reader threads listening on the Gateway. It is potentially possible to have one inbound RA per sender system, but this is typically not required and we therefore recommend using only the default inbound RA. We also typically recommend having only one sender IDOC_AAE channel per system. You might have multiple channels depending on channel settings (e.g. control records from payload, packaging settings or similar). More information about tuning the IDOC_AAE adapter can be found in Note 1641541.

          Regards,

          Mike

          1. Former Member

            Thanks Mike ,

            In our case the problem was not with receiving the IDocs in PI from ECC. The IDocs were coming to PI without any issues and there was no backlog in SM58. There was a backlog in the dispatcher queue in PI, though.

            The problem started when the heavy-load IDoc interfaces impacted the processing of other IDoc interfaces.

            At a particular time only 2-3 interfaces (mostly IDoc-SOAP) were getting processed, and the messages of the other 7 interfaces (mostly IDoc-File) were lying in TBD status. These were not processed until the heavy-load messages were cleared or we forcefully reprocessed some of them manually.

            I am not too concerned with the performance of the messages, as we understand very well that message processing will be delayed under heavy load.

            My entire concern was the impact on other interfaces using the same IDoc sender channel. Some of them were business-critical interfaces and hence the issue was escalated.

            So for this concern, what is your final conclusion?

            Increase Inbound_RA threads?

            Set max receivers?

            Further, the idea of having a different communication channel for each interface was given in the SAP performance document to help improve parallelism (section 6.1.1, page 51).

            Appreciate all your help and thanks.

            1. Mike Sibler
              Post author

              In this case maxReceivers is essential to prevent one slow interface from blocking all others. When setting this you often have to increase the number of consumer threads as well. E.g. if you set maxReceivers to 5 it does not make sense to have only 5 consumer threads. You should then increase them to e.g. 20 or higher to prevent multiple interfaces to one slow backend from blocking all threads again.

              Please also look at the new setting for maxReceivers per interface as documented in section 3 above. This might help to limit the threads for a “problematic” receiver backend that has multiple interfaces at the same time.
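              The sizing rule can be sketched as simple arithmetic (illustrative Python; the function name is invented):

```python
import math

def slow_receivers_to_saturate(consumer_threads, max_receivers):
    """How many distinct slow receiver interfaces (each capped at
    maxReceivers threads) it takes to occupy every consumer thread on
    a node. Illustrative arithmetic only, not SAP code."""
    return math.ceil(consumer_threads / max_receivers)

# With the default 5 consumer threads, maxReceivers=5 still lets a
# single slow interface take every thread:
print(slow_receivers_to_saturate(5, 5))    # 1
# Raising the consumer threads to 20 means it now takes 4 slow
# interfaces to the same backend to block the queue completely:
print(slow_receivers_to_saturate(20, 5))   # 4
```

              This is why maxReceivers and the consumer thread count have to be tuned together rather than in isolation.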

              Regards,
              Mike

  15. Former Member

    Hello Mike,

    Have gone through the blog and found it very useful.

    I have one question for which i did not find the answer yet.

    We have 5 receiving JDBC communication channels for different messages pointing to the same SQL server. When messages are triggered, JDBC takes messages on a FIFO basis. Is this standard behaviour? We are processing a huge data volume for start-up (around 40K products, 20K customers). The customer messages have to wait until the product messages are finished.

    Is there a parallel mechanism for the JDBC adapter? Do I need to set the parameter queueparallelism.maxReceivers?

    Thanks,

    Sunil Joyous

      1. Former Member

        Thanks Praveen. The thread you suggested works per interface.

        We are on a dual stack. We have 5 different interfaces using 5 JDBC communication channels. I want parallelization at the JDBC adapter level.

        If I refer to point number 2 in the blog above, it is suggested to set the parameter maxReceivers. For classical dual-stack based scenarios this allows restricting the number of worker threads based on the receiver party.

        I have figured out where to set the parameter maxReceivers, but I am not able to find where to specify the receiver party.

        Regards,

        Sunil

  16. J. Jansen

    Hi Mike,

    First of all: great blog. It has sincerely helped us tackle some major issues one of our clients was facing with regard to some high-volume and badly performing synchronous interfaces (ICO). Now we are looking to apply the max receiver and max receiver per interface parameters for asynchronous ICOs as well. However, we don’t see the expected results, as opposed to the behaviour we see with the synchronous ICOs, namely restricted use of outbound consumer threads in the SOAP adapter.

    In our development system (7.4) we have set up the system for asynchronous ICOs only.

    When we set the per interface receiver to one, still more than one consumer thread is being consumed.

    (screenshot: SAPCall1Parallel.png)

    We started flooding the system with a SOAP UI load test at 12:25 and stopped it at 12:30. Although the worker threads never top 3 consumed threads, we would expect them not to top 1.

    Below you see a graph of our acceptance system, where we have set the per interface parameter to 8. As you can see, the rule is enforced properly. It never tops 8.

    (screenshot: SAPCall8ParallelAcceptance.png)

    Is there a difference in how the mechanism works for asynchronous scenarios as opposed to synchronous? I would expect the same behavior.

    The only difference in the Java system properties is that queueTypes is set to “icosynchronous” in acceptance and “icoasynchronous” in development.

    Kind regards,

    Jeroen

    1. J. Jansen

      I found out that asynchronous messages in an ICO scenario use two threads on the sender adapter. The first simply receives the message (I only tried an asynchronous ICO with a SOAP sender, just to be clear), and the other one actually processes the message. So I was confused by the fact that more threads were being consumed than configured in the parallelization configuration. However, it is clear that once the initial load (burst mode) has created a backlog and no more messages are being sent to the adapter (first thread), during the processing of the backlog only the configured number of threads (second thread) is consumed.
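      That two-thread behavior can be modeled with a toy producer/consumer sketch (plain Python; all names and structure are invented, not SAP internals):

```python
import queue
import threading

CONFIGURED_PARALLELISM = 1   # the per-interface setting from the example

backlog = queue.Queue()      # stands in for the Messaging System queue
processed = []

def receiver(raw):
    # First thread type: only accepts the message and enqueues it,
    # which is why extra threads show up during the burst.
    backlog.put(raw)

def worker():
    # Second thread type: does the actual processing. Only
    # CONFIGURED_PARALLELISM of these run, so once the burst stops the
    # observed thread usage drops to the configured value.
    while True:
        item = backlog.get()
        if item is None:     # sentinel: no more work
            break
        processed.append(item.upper())

for msg in ["a", "b", "c"]:  # burst: receiver side fills the backlog
    receiver(msg)

workers = [threading.Thread(target=worker) for _ in range(CONFIGURED_PARALLELISM)]
for w in workers:
    w.start()
for w in workers:
    backlog.put(None)
for w in workers:
    w.join()
print(processed)             # ['A', 'B', 'C'] with a single worker
```

      During the burst both thread types are active, but the backlog is drained by the configured number of workers only, matching the observation above.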

      Regards,

      Jeroen

  17. Former Member

    Hi Mike/All,

    Very informative blog and forum.

    By the way, we wanted to set a size protection limit for large messages in our adapter engine and I came across messaging system queue functionality.


    We want to implement the configurations below, but first we have to know if they are used only for ICOs:

    NWA Configuration > Infrastructure > Java System Properties > Services > XPI Service: Messaging System

    Configurations

    1. messaging.largemessage.enabled = TRUE

    2. messaging.largemessage.threshold = 10 MB

    3. messaging.largemessage.permits = 10

    4. messaging.largemessage.blacklistXLMessage = TRUE

    5. messaging.largemessage.queueTypes = Send,Recv
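    As a conceptual sketch of how such permit-based throttling generally works (plain Python; this is not the actual Messaging System implementation, only the threshold and permit values mirror the parameters listed above):

```python
import threading

THRESHOLD_BYTES = 10 * 1024 * 1024    # messaging.largemessage.threshold
PERMITS = 10                          # messaging.largemessage.permits

large_permits = threading.Semaphore(PERMITS)

def deliver(payload, send):
    """Messages at or above the threshold must acquire one of a fixed
    number of permits first, capping how many large messages are
    processed concurrently; small messages pass straight through."""
    if len(payload) >= THRESHOLD_BYTES:
        with large_permits:           # blocks while PERMITS large messages are in flight
            send(payload)
    else:
        send(payload)

sent = []
deliver(b"small", sent.append)        # below threshold: never throttled
deliver(b"x" * THRESHOLD_BYTES, sent.append)  # large: needs a permit
print(len(sent))                      # 2
```

    The point of the permit count is exactly this cap: small messages keep flowing while at most a fixed number of large messages occupy resources at the same time.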


    Or are there other features in PI 7.31 SPS14 with which we can prioritize/isolate large messages in the adapter engine?


    Let us know your inputs/thoughts on this.


    Thanks!

    1. Michael Appleby

      Unless you are asking for clarification/correction of some part of the Document, please create a new Discussion marked as a Question.  The Comments section of a Blog (or Document) is not the right vehicle for asking questions as the results are not easily searchable.  Once your issue is solved, a Discussion with the solution (and marked with Correct Answer) makes the results visible to others experiencing a similar problem.  If a blog or document is related, put in a link.  Read the Getting Started documents (link at the top right) including the Rules of Engagement. 

      NOTE: Getting the link is easy enough for both the author and Blog.  Simply MouseOver the item, Right Click, and select Copy Shortcut.  Paste it into your Discussion.  You can also click on the url after pasting.  Click on the A to expand the options and select T (on the right) to Auto-Title the url.

      Thanks, Mike (Moderator)

      SAP Technology RIG

  18. Former Member

    Hi,

    Great Blog.

    So, the sender thread has more work to do in a single stack. Is it better to assign more threads to the sender than to the receiver to improve performance?

    Thanks in advance.

