ABAP tips: Performance of shared memory objects
So I got into a discussion about the performance impact of using Shared Memory Objects. For those not familiar with the concept, it’s basically an object instance that can be accessed by all users logged onto the same application server.
Obviously there will be some performance costs, but the question we were trying to answer is how big a penalty it incurs. In the end I wrote a little test program and thought I’d share the results.
Findings
- Accessing a Shared Memory Object component in readonly mode is a little over 50 times slower than using an instance in session memory.
- Accessing the same object in update mode is more than 5 times slower than readonly mode, in total 300 times slower than a regular object instance.
Test Results
I tested a million method calls each for a standard call, readonly mode and update mode. Each iteration instantiates / attaches the object.
UPDATE: I also added another test shm_attach_once, where the instance is attached before the loop. This demonstrates that once attached, the performance is pretty much the same as a regular object.
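In essence, the two read variants boil down to something like the following. This is a minimal sketch, not the actual test source: zcl_demo_area stands in for the SHMA-generated area class and mv_value for a component of its root class, and error handling is omitted just as in the test program.

```abap
DATA lv_value TYPE i.

" shm_read: attach / detach on every iteration
DO 1000000 TIMES.
  DATA(lo_handle) = zcl_demo_area=>attach_for_read( ).
  lv_value = lo_handle->root->mv_value.
  lo_handle->detach( ).
ENDDO.

" shm_attach_once: attach a single time, then read inside the loop
DATA(lo_once) = zcl_demo_area=>attach_for_read( ).
DO 1000000 TIMES.
  lv_value = lo_once->root->mv_value.
ENDDO.
lo_once->detach( ).
```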
If anyone is interested in my test source, it’s over here. Note that it is not a best-practice example, just a quick and dirty implementation with minimal code and no error handling, conditional instantiation, etc.
Conclusion
My takeaway from this is:
- For readonly access, I wouldn’t worry about performance. A common reason to use Shared Memory Objects is to provide features that would otherwise be implemented in the DB, so even with this overhead it will still be faster than a database round trip, and it remains a very fast way to persist data across sessions.
- For write access it’s wise to put a little more thought into it. Don’t use attach_for_write unless you need it. Use common sense and don’t do it inside a repeated piece of code such as a loop; instead, attach once beforehand and commit after the loop, as sketched below. Stick to that and the 55 ms it takes on my laptop is unlikely to be a dealbreaker.
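To illustrate that last point, here is a minimal sketch of the write pattern, again using the hypothetical zcl_demo_area area class, an internal table mt_data on its root, and lt_new_entries as whatever new data needs to be stored (all names are illustrative):

```abap
" Attach for update once, outside the loop
DATA(lo_handle) = zcl_demo_area=>attach_for_update( ).

LOOP AT lt_new_entries INTO DATA(ls_entry).
  " Modify the shared root directly; no attach / detach per iteration
  INSERT ls_entry INTO TABLE lo_handle->root->mt_data.
ENDLOOP.

" One commit at the end releases the change lock and publishes the new version
lo_handle->detach_commit( ).
```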
Good tips! I have yet to find a practical use case for this. 🙂
A good use case is frequent but expensive calculations or queries of rarely changing data.
Real-world example: SRM applications use Purchasing Org master data from ECC. We could replicate this into a custom table using jobs. But instead we can just use a shared memory object instance available to all users:
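Roughly like this (a sketch only, with zcl_purch_org_area as an assumed name for the SHMA-generated area class):

```abap
" Every session on this application server attaches to the same shared instance
DATA(lo_handle) = zcl_purch_org_area=>attach_for_read( ).
DATA(lt_purch_org) = lo_handle->root->mt_purch_org.
lo_handle->detach( ).
```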
The point is that all users have access to the same instance, and therefore see the same mt_purch_org attribute. No DB tables, no replication jobs, faster response, less code and simpler logic.
Only if they are logged in to the same application server (AS). If the prod system has multiple AS, SHM objects can be a problem.
Just like Jelena Perfiljeva, I haven’t used them in productive code because of the limitation mentioned above.
Most of the (on-prem) projects I have worked on had multiple AS in Prod, so I had to use INDX-like tables to read/write the data in memory.
You’re right, and I specifically wrote in the blog “all users logged onto the same application server.”
However, that is not necessarily a problem. In my example above there will be an instance per application server, but that’s still preferable to thousands of queries each day. It all depends on the use case and design.
Tbh, I really like the cleanliness of the solution. With the INDX-like tables we had to program deletion jobs, which is not an elegant solution.
Now, this is interesting. How did you achieve it?
There’s nothing to achieve; it just happens that way.
If an instance does not exist on a server, the next person or job that tries to use it will instantiate it, and all others on the same server will attach to it.
This can be coded into a factory method so developers using the object don’t need to worry about the shared memory aspect at all; to them it’s completely transparent.
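A minimal sketch of such a factory, assuming an SHMA-generated area class zcl_purch_org_area, a root class zcl_purch_org_root with a loader method, and rt_purch_org as the returning parameter (names and the exception handling are illustrative only):

```abap
METHOD get_purch_orgs.
  DATA lo_read  TYPE REF TO zcl_purch_org_area.
  DATA lo_write TYPE REF TO zcl_purch_org_area.
  DATA lo_root  TYPE REF TO zcl_purch_org_root.

  TRY.
      " Normal case: an instance already exists on this application server
      lo_read = zcl_purch_org_area=>attach_for_read( ).
    CATCH cx_shm_no_active_version.
      " First caller on this server: build the instance once ...
      lo_write = zcl_purch_org_area=>attach_for_write( ).
      CREATE OBJECT lo_root AREA HANDLE lo_write.
      lo_root->load_purch_orgs( ).       " e.g. fetch the data from ECC once
      lo_write->set_root( lo_root ).
      lo_write->detach_commit( ).
      " ... then attach for read like everyone else
      lo_read = zcl_purch_org_area=>attach_for_read( ).
  ENDTRY.

  rt_purch_org = lo_read->root->mt_purch_org.
  lo_read->detach( ).
ENDMETHOD.
```

Alternatively, the root class can implement IF_SHM_BUILD_INSTANCE so the area can build itself on first access; the factory approach simply keeps all of that logic in one visible place.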
The system also performs some housekeeping whereby SHM instances are flushed after periods of inactivity, and the like. Maybe that’s a blog for another day…
Great, thanks.