Cloud Applications Studio MDRO: FAQs
Dear Community,
This is a list of common questions on the Mass Data Run Object (MDRO) in the SAP Cloud Applications Studio that we frequently receive from partners and customers. I have tried to answer them and will keep adding to this list.
Note: Every time you see “job” below, read it as “MDRO job”
- Why do jobs fail with the error “Your user is assigned to a different solution. Please use a valid user.”?

This issue occurs when a job instance is scheduled with a PDI user rather than a business user. When a PDI user is used, a solution is assigned to that user depending on the PDI process. So, if the solution assigned to the PDI user during a job execution is not the same as the MDRO’s solution, you see this error.

We recommend always using a business user when scheduling an MDRO instance.

- Why do jobs fail with the error “Error during invocation of FPP service GET_MDRO_FPP; action failed”?

This error occurs when users create multiple jobs and schedule them to execute at the same time, resulting in duplicate jobs. In this case, one job execution acquires a lock on the MDRO while the others fail to do so.

To resolve this, select the MDRO instance on the MDRO OWL, click the “View Jobs” button, and delete the duplicate jobs.

- Why does the status of an Application Log instance show “In Process” even though there is no background job running?

When a job execution fails with a dump, the MDRO framework doesn’t get control back from the application (PDI, for example). Because of this, the MDRO framework doesn’t update the status of the application log, and it is displayed as “In Process”.

To find the correct status of the log, select the required MDRO instance, click “View Jobs”, and select “Run Jobs” from the “Schedule Jobs” drop-down. The status of the job tells you the correct status.

To avoid situations like these, make sure the MDRO’s action logic handles error cases so it doesn’t dump. You can also use the “Avoid Dumps” suggestions from the “Performance Tips” feature on the action file.

- Why do jobs fail with the error “Action EXECUTE not possible; action is disabled”?

Job execution is only possible if the status of the MDRO instance is “Active”. This error occurs when an MDRO instance with existing jobs is changed, which changes its status to “In Revision”. When a job execution starts for such an instance, you see this error.

Make sure you set the MDRO instance back to “Active” after finishing your changes.

- Why is the application log sometimes empty?

Make sure the MDRO’s selection parameters retrieve at least one BO instance. If there are no instances to execute the action on, there is nothing to write to the application log either.

- How do I make the best use of the “Parallel Processing” feature?

Parallel processing works best in the following cases:

– The MDRO selection parameters retrieve a huge number of records to process.

– Remember that the action logic doesn’t matter for parallel processing to work. If you query 1 million records in the action logic but the MDRO selection parameters retrieve only 10, parallel processing won’t help. Only the number of instances retrieved via the MDRO selection parameters counts.

– Don’t enable parallel processing for MDROs where two or more instances might change the same object (standard or custom). This creates locking issues, resulting in job failures.

- How can I increase the performance of the MDRO?

The performance of a job depends heavily on the action logic, so make sure the action code is optimized. Use the “Performance Tips” feature available in the context menu of the ABSL file.

- Why were only some, or none, of my instances saved even though only a few failed?

When parallel processing is not enabled (i.e., sequential processing), all the instances are placed inside one package for execution, and a save is triggered only at the end of the package. So even if a single instance fails, the whole package fails and none of the instances are saved.

When parallel processing is enabled, the save still happens only at the end of each package. The difference is that multiple packages with 50 instances each are created and executed in parallel. So even if one of the 50 instances in a package fails, the save doesn’t happen for that whole package.

- Can I change the package size of the MDRO when parallel processing is enabled?

No. The package size of 50 instances is fixed for PDI MDROs and can’t be changed.
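The save-per-package behavior described in the last two answers can be sketched as a small conceptual model. This is plain Python, not ABSL, and the function names are purely illustrative; the point is only that a save happens once per package, so one failing instance discards the whole package:

```python
# Conceptual model of MDRO package saves (illustrative only; the real
# framework runs inside SAP Cloud Applications Studio).

PACKAGE_SIZE = 50  # fixed package size when parallel processing is enabled

def run_package(instances, action):
    """Run the action on every instance; keep results only if all succeed."""
    results = []
    for inst in instances:
        try:
            results.append(action(inst))
        except Exception:
            return []  # one failure -> nothing in this package is saved
    return results     # the save happens once, at the end of the package

def run_mdro(instances, action, parallel=False):
    if not parallel:
        packages = [instances]  # sequential: everything in one package
    else:
        packages = [instances[i:i + PACKAGE_SIZE]
                    for i in range(0, len(instances), PACKAGE_SIZE)]
    saved = []
    for pkg in packages:  # with parallel=True these would run concurrently
        saved.extend(run_package(pkg, action))
    return saved
```

For example, with 120 instances where one instance fails, sequential processing saves nothing, while parallel processing still saves the two packages that do not contain the failing instance.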
Dear Sasi Kanth Velagaleti,
Thanks for sharing this information.
It is useful for us.
Best Regards,
Zun
Hello Sasi Kanth,
I have a few questions I hope you might be able to answer...
A background job times out after 10 hours - is it possible to increase this time limit to 15 hours?
Is parallel processing applied on all Query executions in a script?
Is parallel processing applied to foreach loops?
Best Regards
Jay
Hi Jay,
Please find the answers in the same order as the questions.
I hope this makes things clear for you.
Best Regards,
Sasi
Thank you, Sasi, for your speedy response!
Based on your response to the 2nd and 3rd questions, I guess parallel processing is not relevant in my case then as my query executes an action against a single instance of a custom business object.
For the first question, I will have to identify more code improvements or separate out the processes.
Best Regards
Jay
Dear Sasi Kanth & All SAP users,
I too thought that sequential processing would pick up all the data in one package. However, today I witnessed that a data set of 30K records was split into packages of 5K.
Any suggestions, please?
Hi Myilraja,
Your observation is right. The data is split into sets of 5000 each for performance reasons. This is the same for both parallel and sequential processing, but the differences between the two start here.
During sequential processing, all the instances in one set of 5000 are run one by one (sequentially). Once a set of 5000 is processed, the next set of 5000 is picked.
During parallel processing, however, the instances in one set of 5000 are further divided into 6 packages of 50 each. The remaining 4700 wait in line to be packaged after these 6 packages are processed. These 6 packages run in parallel, i.e. at a given time a maximum of 6 instances (one from each of the 6 packages running in parallel) are processed, until the 5000 instances are done and then the next set of 5000 is picked.
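The dispatching just described can be summarized with a small calculation. The numbers (sets of 5000, packages of 50, 6 packages in parallel) come from the explanation above; the Python function itself is only an illustrative sketch:

```python
SET_SIZE = 5000    # data is first split into sets of 5000
PACKAGE_SIZE = 50  # each set is divided into packages of 50
MAX_PARALLEL = 6   # at most 6 packages run at the same time

def dispatch_plan(total_records):
    """Return (number of sets, packages per full set, parallel degree)."""
    full_sets, remainder = divmod(total_records, SET_SIZE)
    sets = full_sets + (1 if remainder else 0)
    packages_per_full_set = SET_SIZE // PACKAGE_SIZE  # 100 packages of 50
    return sets, packages_per_full_set, MAX_PARALLEL
```

For the 30,000-record example above, this gives 6 sets of 5000, where each set yields 100 packages of 50 that are processed 6 at a time.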
Hope this clarifies,
Best Regards, Sasi.