Spot potential performance problems IN ADVANCE
With more than 7 years of experience at SAP resolving problems in the SAP Manufacturing Execution solution, I can tell you that one of the most difficult tasks is to predict performance before go-live. You want to spot a potential performance problem in advance rather than deal with it in a production system, where any downtime costs you money.
I bet you already know about the sizing spreadsheet, DB parameters and other recommendations provided in SAP guides and Notes. However, is that really enough to predict the performance of your particular scenario(s) and configuration? Will your custom extension, for instance, perform fast enough under production load?
I looked for a way to simulate the production load for specific scenario(s), and I want to share with you the approach I tried.
The key role in this approach belongs to SAP PCo. As of its newest release, 15.1, it has become really simple to configure an Enhanced Notification with multiple destination systems and to handle the response messages.
Here is a screenshot that illustrates the test scenario I configured. Let me explain the screenshot and the scenario I tested:
- The scenario is started/triggered by an update of a tag on a 3rd-party data server (e.g. an OPC data server). The tag essentially represents an SAP ME Resource (a machine/robot) on the shop floor.
- In my scenario it triggers a Start By Item request to release a new SFC. The important step here is handling the response message, which contains the number of the SFC that has been released. In many cases (and this one in particular) a HANDLE string containing this SFC number will be needed later, so I wrote some C# code that concatenates strings to build the HANDLE used in subsequent steps.
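To make the response-handling step concrete, here is a minimal sketch of the string concatenation involved. The original code is C#; this Python version is only illustrative, and the `SFCBO:<site>,<sfc>` handle layout is an assumption based on common SAP ME handle conventions — verify the exact format against your SAP ME release before relying on it.

```python
def build_sfc_handle(site: str, sfc_number: str) -> str:
    """Build an SFC HANDLE string from the site and the SFC number
    returned in the Start By Item response message.

    NOTE: the 'SFCBO:<site>,<sfc>' layout is an assumption, not taken
    from the blog; check your system's actual handle format.
    """
    return f"SFCBO:{site},{sfc_number}"

# Example: the response message returned SFC number "SFC0001" for site "1000"
handle = build_sfc_handle("1000", "SFC0001")
print(handle)  # SFCBO:1000,SFC0001
```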
- The next step is to collect data against the Resource itself, also taken from the 3rd-party data server. This step mainly shows that you can add any steps to the scenario you test, and in particular that SAP ME can easily consume data from the machines on the shop floor.
- And the last step in this scenario is to complete the SFC that was started in step #2.
The whole scenario (Release SFC -> Collect Data -> Complete SFC) is powered by a custom DLL, in which I hard-coded the sequence of actions described above. Any additional logic can be implemented there as well, so that the test represents exactly the scenario that will be executed in production.
The question you may still have at this stage: it is now clear how an SAP ME scenario can be pre-configured in PCo, but how do you simulate the load? The answer: by means of the data server and its client. You can define the frequency at which the tag is changed (in other words, the clock cycle of the Resource, i.e. of the machine on the shop floor) and hence the frequency at which the scenario is triggered. Eventually you can loop it as many times and as often as you need.
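The tag-driven triggering can be sketched as a simple loop. This Python fragment is only a model of the mechanism, not the actual PCo/OPC client: `write_tag` is a hypothetical callback standing in for an OPC write, and each write represents one machine clock cycle that would fire the PCo notification.

```python
import time

def simulate_tag_updates(write_tag, cycle_seconds: float, iterations: int) -> int:
    """Update the trigger tag at a fixed 'clock cycle' frequency.

    write_tag: callback that pushes a new value to the data-server tag
    (in a real setup this would be an OPC client write, and each change
    would trigger the PCo Enhanced Notification scenario).
    Returns the number of updates performed.
    """
    for i in range(iterations):
        write_tag(i)               # each tag change triggers one scenario run
        time.sleep(cycle_seconds)  # the machine's clock cycle
    return iterations

# Usage: record values instead of writing to a real OPC server
updates = []
count = simulate_tag_updates(updates.append, cycle_seconds=0.0, iterations=5)
print(count, updates)  # 5 [0, 1, 2, 3, 4]
```

Tuning `cycle_seconds` down (or looping longer) is exactly the "define the frequency" knob described above: it lets you compress hours of production load into a short test window.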
As you can imagine, it is possible to configure several scenarios and test them simultaneously, just as on different production lines in a real production system.
This approach can help to:
- estimate the response times of SAP ME;
- estimate the number of records in the tables;
- define the clock cycle of your production line.
In short, it helps you spot the potential problem in advance and hence avoid it in production.
I hope you find this blog post useful. If so, I would highly appreciate both your Likes/Ratings on this blog post and your comments on the following:
- Would you use this approach? Why or why not?
- If yes, would you like SAP to prepare custom DLLs for your particular scenario(s) on request?
- Do you use another approach to test production load and estimate performance before go-live? If so, please explain it.
I hope to hear from you and turn this blog post into a conversation, because your comments can help enhance this approach.
Best regards,
Alex.
Hi Alex
I think that any tool that helps to identify performance issues before they occur is beneficial. Whether this approach is suitable probably depends on a lot of factors, including the production model and types and size of data to be recorded by SAP ME.
Typically, I find that customers using the ME product in very different ways will experience quite different issues overall.
For the last few performance problems I've encountered at least, your approach would not have uncovered the main performance issues because:
1) the issues occurred in core plugins to the operator POD (even with modest master data volumes and virtually no historical transactional data)
2) the issues occurred in custom plugins to the POD, but only with significant transactional data already present
So, without interacting directly with the POD at least, these would not have been found.
I've never really observed the collection of large amounts of parametric data to have any significant performance impact in itself, regardless of the data size, even with hundreds of measures collected per test instance. Inserting this data into tables tends to be reasonably efficiently handled by SAP ME, but there are many more significant areas of inefficient core code executing, especially when interacting with the users.
The sizing spreadsheet fails to account for so much potential ME activity that I find it of very limited value for actual "transaction" estimation (whatever your definition of a transaction may be). For example, the "message" table and its associated indexes and tables represent the largest WIP data set for one customer, and the transaction logging of web service calls represents the largest set in ODS for another, yet neither of these is included in the ME sizing spreadsheet. Additional loading on the server caused by MEINT (especially yield confirmations, which may or may not be correlated), the execution of core or custom reports, archiving, message deletion etc. are also not covered, yet do have a significant impact on overall performance.
For raw performance testing, you can get a lot of data back from the SoapUI tool by SmartBear. There, you can create TestCases that record a large amount of performance data for later analysis. Of course, this only performs web service calls to SAP ME and would still not uncover the GUI issues noted above, but it is flexible in that you can call any web service that represents the functionality you intend to use in production. I use the DM520 hook to introduce the SFC numbers to the system (so SoapUI is the SFC number source and ME is not in control of the numbering). You can also generate the XML needed for the SoapUI TestCases via an Excel spreadsheet with formulas, though all of this takes some time. However, for simple cases like start SFC, data collect and complete SFC, it is quite easy to do, and the tests can be multithreaded to load the server in a more realistic manner.
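The multithreaded start/data-collect/complete pattern Stuart describes can be modelled in a few lines. This Python sketch is not SoapUI and makes no real SOAP calls: `call_webservice` is a hypothetical stand-in for a web service invocation against SAP ME, and the externally supplied SFC numbers mirror the DM520-style numbering where the client, not ME, generates them.

```python
import threading
import time

def run_scenario(results, lock, call_webservice, sfc_number):
    """Execute one start -> data collect -> complete sequence and time it."""
    start = time.perf_counter()
    for action in ("start", "dataCollect", "complete"):
        call_webservice(action, sfc_number)  # stand-in for a SOAP call to SAP ME
    elapsed = time.perf_counter() - start
    with lock:  # guard the shared results list across worker threads
        results.append((sfc_number, elapsed))

def load_test(call_webservice, n_threads=10):
    """Fire n_threads scenarios concurrently, each with its own
    externally generated SFC number (as with the DM520 hook)."""
    results, lock, threads = [], threading.Lock(), []
    for i in range(n_threads):
        t = threading.Thread(target=run_scenario,
                             args=(results, lock, call_webservice, f"SFC{i:04d}"))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()
    return results  # list of (sfc_number, elapsed_seconds)

# Usage with a dummy web service stub instead of a live SAP ME system:
calls = []
def fake_ws(action, sfc):
    calls.append((action, sfc))

timings = load_test(fake_ws, n_threads=4)
print(len(timings), len(calls))  # 4 12
```

Swapping `fake_ws` for real web service calls (or an HTTP client posting SOAP envelopes) gives the per-scenario response-time data that SoapUI TestCases would otherwise collect for you.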
Customers that do not use PCo would also not be able to take advantage of your solution, but those that do may have an interest in your offering. It's a pity others have not expressed an opinion here however, after all your hard work.
Thank you, Stuart, for your valuable feedback. I hope we hear more comments from SCN community members, especially after your input.
In the meantime, I'd like to comment on, and build on top of, what you have written. Here are some of my points:
However, this approach can help fill the ME tables with a significant volume of transactional data exactly as it will happen in production, and in quite a short time, because it is up to you to define the frequency, as mentioned in the blog itself. The point is not only to see how ME performs handling these requests, and not just to mass-load some dummy data, but to realistically reproduce which operations/resources/routing steps will eventually accumulate more data than others. Once this data is in place, testing the GUI of each workstation (the pre-defined POD for that operation/resource/routing step) becomes really meaningful.
Nevertheless, I would be really curious to hear about your experience using SoapUI for automated testing of SAP ME. Should you find some time one day to write a blog about it, I will be one of the first to read it through 😉
Br, Alex.