Daniil Mazurkevich

Some EWM Performance Bottlenecks

In this blog post I would like to share my experience with possible performance bottlenecks in EWM that we had to tackle on some projects, especially those with a high degree of automation in the process.

I would be glad to get some more input in the comments.

Performance is quite often a topic in EWM: it could be a long runtime for putaway warehouse task creation, delays in RF operations, or delays in MFS telegram processing, where you need fast responses to keep up with the physical movements.

One of the good examples would be processing a high number (say, one hundred) of deliveries distributed from ERP. These could be inbound deliveries with automatic GR posting / WT creation, or outbound deliveries for consumption posting; either way, it can get interesting 🙂

The first bottleneck here comes from the distribution: you get qRFC records for 100 deliveries distributed from VL06I/COGI, and you see something like a resource-lack status (“Inb_Res_Lack”) when the maximum number of pRFCs is reached, hitting it 73 times or even (a real everyday example) 475 times.

If during this “high load” period users produce one or several qRFC records and wait for the result (e.g., Close HU step (or the return function from the work center) > qRFC record > WT > PPF printout from the WHO), then sorry, dear user: the system has X deliveries that arrived earlier, so you will have to wait a little bit 🙂

And let’s imagine our first bottleneck is almost behind us: we have 50 deliveries in and the other 50 are almost there, and now it is PPF time. It is good if delivery notifications are disabled (thanks to the “Skip Request” function in S/4HANA); in that case there is a little less to process. But still, for an inbound delivery with GR + WT:

  1. PPF with ERP Message (delivery change, optional, but could be) > tRFC > qRFC OUT
  2. PPF for GR > tRFC > qRFC IN
  3. PPF for ERP Message (GR) > tRFC > qRFC OUT
  4. PPF for WT Creation > tRFC > qRFC IN
  5. PPF for WHO (optional)

So in such a scenario it is easy to run into a resource shortage in dialog processes or in RFC connections. It also means extra load on the tRFC*, qRFC*, and aRFC* tables (TRFCQSTATE, ARFCRSTATE, ARFCRDATA, …); see SAP Note 539917 for additional information.
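To get a feeling for the fan-out, a back-of-the-envelope calculation helps. The sketch below is not SAP code; the per-step hop counts are illustrative assumptions based on the chain above (each PPF action producing one tRFC call plus one qRFC record), and the exact set of actions depends on your PPF configuration.

```python
# Rough fan-out estimate for the PPF chain above (illustrative numbers only).
deliveries = 100

# Assumed RFC hops per inbound delivery with GR + WT, following steps 1-4
# (step 5, WHO creation, is assumed to stay local and produce no RFC record):
rfc_hops_per_delivery = {
    "ERP message (delivery change)": 2,  # tRFC + outbound qRFC
    "goods receipt": 2,                  # tRFC + inbound qRFC
    "ERP message (GR)": 2,               # tRFC + outbound qRFC
    "WT creation": 2,                    # tRFC + inbound qRFC
}

total_rfc_records = deliveries * sum(rfc_hops_per_delivery.values())
print(f"{deliveries} deliveries -> ~{total_rfc_records} tRFC/qRFC records")
```

Even with these conservative assumptions, 100 deliveries turn into roughly 800 RFC records competing for work processes and RFC resources within a short time window.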

The second important bottleneck is locking on storage bin / HU level.

If you need the actual capacity of the source or destination bin, the storage bin must be locked, and since everything runs in parallel you can get lock contention on /SCWM/LAGP.

If you do not use capacity calculation (deactivated at storage type level), you can get the same issue when posting stock without HUs: the system locks the /SCWM/HUHDR table with the parent stock GUID, which is the storage bin if HUs are not in use (loose stock). Below you can see an example of such a lock in the system (sorry, the screenshot is in German). And since the number of update processes in the system is lower than the number of dialog processes, long-running update processes cause problems for dialog users.


HUHDR Lock in Update process
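The effect of such a lock is easy to demonstrate outside of SAP. The following is a minimal, purely illustrative Python sketch (not SAP code): one shared exclusive lock stands in for the enqueue on the bin’s parent stock GUID, and N “parallel” postings against the same bin degrade to sequential execution.

```python
# Minimal sketch of why loose stock on one bin serializes parallel postings:
# every worker must take the same exclusive lock, so N parallel processes
# effectively run one after another on that bin.
import threading
import time

bin_lock = threading.Lock()  # stands in for the lock on the bin's stock GUID

def post_stock(worker_id: int, durations: dict) -> None:
    start = time.perf_counter()
    with bin_lock:            # all workers contend for the same bin
        time.sleep(0.05)      # simulated update work held under the lock
    durations[worker_id] = time.perf_counter() - start

durations = {}
workers = [threading.Thread(target=post_stock, args=(i, durations))
           for i in range(5)]
for w in workers:
    w.start()
for w in workers:
    w.join()

# The last worker had to wait for all its predecessors:
print(f"max wait: {max(durations.values()):.2f}s")
```

With five workers and 0.05 s of work under the lock, the slowest posting takes roughly five times as long as a single one would; the same effect in an update process keeps the work process busy far longer than the actual database change requires.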

What can be done to avoid high load peaks on the EWM side?

    1. The simplest way would be to slow down the distribution on the ERP side. E.g., if you distribute documents with a background job (reports WS_MONITOR_*_DEL_DIST), you can do it by number ranges (*0, *1, … *9); this way you distribute the load in smaller portions.
    2. Once documents are distributed, you can slow down processing on the EWM side.
      1. You can wait until SAP develops extended functionality of the Inbound Scheduler to define a maximum number of running processes per registered queue prefix. I will try to update this post when that happens, or at least comment on it.
      2. Another option is to process the queues with your own logic. This can work fairly well; we used this approach on one of our projects: the queue prefix was deregistered in SMQR, and the queues are processed by an infinitely running job.
      3. Theoretically, a separate logon group/instance can be used to limit the number of running processes, but I found this too expensive.
      4. I do not see queue aggregation as a solution, because queues often have to be processed independently, and an error in one queue should not affect the processing of other queues.
    3. Slower processing of PPF actions can be achieved in a similar way as delivery distribution (with number ranges), and it brings one more benefit compared to the usual “Processing when saving”: all selected actions run in one process, and it is a background (BTC) process, so if you execute PPF for 20 documents, they are still queued in one process. One disadvantage: it is cross-warehouse if you use it for deliveries. And custom solutions for PPF should be evaluated for proper clean-up.
    4. Typical performance advice:
      1. Have a good Basis/IT team that uses alerting mechanisms (on AS and DB level) and does not only react when the system is no longer usable. Unfortunately, performance problems can arise long after go-live and hypercare. A lot of problems can be prevented, but for that the system must be monitored and maintained.
      2. Regularly install updates and patches (database, core patches). It sounds like “wash your hands”, but from time to time a system is simply too old, some performance bug fixes are not implemented, and the system is slow but bearable, so nobody takes any action.
      3. Set up logs and parallel processing correctly (deactivate unneeded logs, activate asynchronous processing where possible, set proper values for parallel processing).
      4. Keep performance notes in mind during design (e.g., avoid storage bins with many quants without HUs).
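The number-range idea from points 1 and 3 can be sketched in a few lines. This is not SAP code, just an illustration of the partitioning: split the documents by the last digit of the document number (*0 … *9), so each job run only pushes a tenth of the load; the document numbers below are made up for the example.

```python
# Hedged sketch of the number-range partitioning: instead of pushing all
# documents at once, group them by the last digit of the document number
# and process each group in its own job run / report variant.
def split_by_last_digit(doc_numbers: list[str]) -> dict[str, list[str]]:
    portions: dict[str, list[str]] = {str(d): [] for d in range(10)}
    for doc in doc_numbers:
        portions[doc[-1]].append(doc)
    return portions

# Hypothetical delivery numbers, evenly spread across the number range:
docs = [f"18000{n:05d}" for n in range(100)]
portions = split_by_last_digit(docs)
for digit, batch in sorted(portions.items()):
    # In the real system each batch would correspond to one background-job
    # variant selecting "document number ends with <digit>".
    print(f"*{digit}: {len(batch)} documents")
```

With 100 documents each portion holds 10, so ten staggered job runs replace one burst of 100 qRFC records arriving at the same moment.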

Here is one example of a database problem during delivery selection: reading one delivery with 5 items takes 10+ seconds. Partitioning of the /SCDL/* tables can help in such a case, and sometimes the Basis team will actually do it proactively, before it turns into a performance problem.


And some performance notes:

1896480 – High Throughput Processes – Best Practices (see reference notes as well)

1423066 – Optimization of the performance in EWM (see reference notes as well)

1896197 – Sizing of Extended Warehouse Management – Best Practices

Notes that I mentioned in the blog post “Performance of PMR Processing” (2747733 & 3146978)

So, from this blog post you have learned about some possible problems and theoretical solutions for high-load ERP–EWM integration.

I would be happy to get some feedback, and maybe you can share some of your own performance problems in the comments.


