I decided to test the performance of SAP Gateway with jMeter. As the results are quite useless without a comparison to a similar solution, I decided to use Jersey for that. Why Jersey? SAP Gateway allows you to access SAP data via OData (= REST + Atom), and Jersey allows you to access data via REST + XML/JSON. As long as you do not really depend on OData, both offer the same functionality: access to data using REST.
I suspected that Gateway would outperform Jersey: for the Jersey test I had to run 2 virtual machines, and Jersey also runs inside NetWeaver AS Java, where CAF is used to connect to the backend (JCo) and Jersey consumes CAF via EJB. It wasn't a scientific test, as I used my local virtual machines, but the tendency of course is valid. For the Gateway VM I used the trial version available for download here at SCN. The Jersey setup is more complicated, as Jersey is a Java application that needs Java 6. I used a virtual machine of NetWeaver CE 7.2 (that download is no longer available at SCN) and connected it to the same Gateway server.
The test was performed using jMeter and consisted of getting the list of all available flights and then calling the details of every flight. As there are 400 flights in the backend, this sums up to 401 requests. The actual data size transmitted cannot be compared directly, as Gateway transmitted OData with Atom, which contains far more information than Jersey with JSON. The average size of the response for the detail flight information is 2560 bytes with Gateway. This sums up to slightly more than 1 MB of data transmitted for the test.
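The request sequence jMeter runs can be sketched as plain Java: one request against the FlightCollection, then one detail request per flight. The loop below only builds the URLs; the host name and the flight keys are placeholders, since the real test takes the 400 keys from the collection response.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the request plan driven by jMeter: 1 list request plus
// 400 detail requests = 401 requests in total. Host and keys are dummies.
public class RequestPlan {
    static List<String> buildUrls(List<String[]> flightKeys) {
        List<String> urls = new ArrayList<>();
        String base = "http://gateway.example.com:8000/sap/opu/sdata/iwfnd/RMTSAMPLEFLIGHT";
        urls.add(base + "/FlightCollection");               // 1 list request
        for (String[] k : flightKeys) {                     // 400 detail requests
            urls.add(base + String.format(
                "/FlightCollection(carrid='%s',connid='%s',fldate='%s')",
                k[0], k[1], k[2]));
        }
        return urls;
    }

    public static void main(String[] args) {
        List<String[]> keys = new ArrayList<>();
        for (int i = 0; i < 400; i++) {                     // dummy flight keys
            keys.add(new String[] { "AA", String.format("%04d", i), "20110601" });
        }
        System.out.println(buildUrls(keys).size());         // 401
    }
}
```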
File size of the detail flight requests returned from SAP Gateway.
For Gateway the URLs were basically the same as for the public available Gateway demo system:
sap/opu/sdata/iwfnd/RMTSAMPLEFLIGHT/FlightCollection(carrid='AA',connid='0017',fldate='20110601') to get the details of a flight. The HTTP content returned from Gateway looks like:
OData Atom feed content from SAP Gateway.
For Jersey I developed a J2EE application that takes the parameters of a single flight in the URL: demo.sap.com~test~bean~rest~web/tobias/getFlight/carrid/AA/connid/0017/fldate/20110601. The response for a flight as JSON:
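The resource essentially maps the alternating path segments (name/value pairs) to the parameters of the backend BAPI call. A framework-free sketch of that mapping, assuming the name/value path layout shown above; in the real application Jersey's @Path/@PathParam annotations and the CAF/EJB layer do this work.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Turns a path like "carrid/AA/connid/0017/fldate/20110601" into a
// parameter map that can be handed on to the backend call.
public class FlightPath {
    static Map<String, String> parse(String path) {
        Map<String, String> params = new LinkedHashMap<>();
        String[] seg = path.split("/");
        for (int i = 0; i + 1 < seg.length; i += 2) {
            params.put(seg[i], seg[i + 1]);
        }
        return params;
    }

    public static void main(String[] args) {
        Map<String, String> p = parse("carrid/AA/connid/0017/fldate/20110601");
        System.out.println(p); // {carrid=AA, connid=0017, fldate=20110601}
    }
}
```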
JSON response from Jersey of the BAPI.
The file size for all flights is 111745 bytes and for the details of a flight it is around 846 bytes. This sums up to less than 1/2 MB for all the data with Jersey and JSON.
File size of the detail flight requests returned from Jersey.
The payload shows that the Atom format is very large compared to JSON. Even though the actual JSON response for OData would be larger than in my Jersey application, it would still be significantly smaller than Atom. Especially when you have a mobile application that uses the OData SDK for Sybase, receiving 400 flight updates throughout a day means: you have to transmit and receive 1 MB per day.
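The comparison follows directly from the measured average response sizes: 400 detail responses at ~2560 bytes (Atom) vs. ~846 bytes (JSON). A quick back-of-the-envelope check, using the averages quoted above rather than exact per-flight values:

```java
// Payload comparison from the measured average response sizes.
public class PayloadMath {
    public static void main(String[] args) {
        int flights = 400;
        long atom = (long) flights * 2560;   // 1,024,000 bytes ~ 1 MB
        long json = (long) flights * 846;    // 338,400 bytes ~ 0.33 MB
        System.out.println(atom + " vs " + json);
        System.out.printf("Atom is %.1f times larger%n", (double) atom / json);
    }
}
```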
A test run consisted of setting a number of concurrent threads and then going through the 401 requests. As an example, this is the result reported by jMeter for Gateway with 20 threads:
The same test run for Jersey:
The actual data was gathered by running the test with 1, 10 and 20 threads and measuring the time it took to complete the test. Additionally I added the average number of server hits.
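The shape of such a run can be sketched in Java: a fixed pool of N "users" works through the 401 requests, and the measured quantity is the wall-clock time until all of them are done. The HTTP call itself is stubbed out below (jMeter does the real requests); only the threading structure is illustrated.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// A fixed thread pool works through all requests; with a server that
// scales, more threads should lower the total wall-clock time.
public class LoadRun {
    static int run(int threads, int requests) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < requests; i++) {
            pool.submit(done::incrementAndGet);  // stand-in for one HTTP request
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        int completed = run(20, 401);            // 20 threads, 401 requests
        long millis = (System.nanoTime() - start) / 1_000_000;
        System.out.println(completed + " requests in " + millis + " ms");
    }
}
```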
The data shows that my Jersey application outperforms Gateway from the beginning and benefits from an increased number of concurrent requests, while Gateway stagnates. To me, SAP Gateway performs unexpectedly badly against Jersey. While Gateway does not improve the total time needed when more threads are added, Jersey does what you expect: the total time needed to complete all 401 requests gets lower, from 35 seconds down to 10.
The question I now have is: why? To get Jersey running I had to start another virtual machine. And NetWeaver AS Java. And CAF. So: what is happening here? Why is Gateway so slow? Is it because of OData? The Gateway architecture? The implementation? The trial version? The configuration that comes with the trial?
Update: applied some performance tuning tips
The number of dialog work processes is set to 20 (1 dialog process is blocked by me being logged in to monitor the system, which leaves 19 available):
ICM status after I executed the jMeter test 2 times to "fill the cache":
Running the test with 20 concurrent threads in jMeter:
Work processes are used:
Result in jMeter:
I ran the test 5 times; 25 seconds was the best result. That's ~7 seconds faster than before and still ~14 seconds slower than with Jersey.
Applied more tuning parameters. Here are the values of the Gateway SP2 trial I'm using. I'm still downloading the SP3 trial to test the impact of sdata vs. odata.
rdisp/tm_max_no: Current value 200
rdisp/max_comm_entries: Current value 500
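For reference, this is roughly how those values would appear in the instance profile. The parameter names are the real ones; the rdisp/wp_no_dia value is taken from the work-process count shown above, and the file layout is only illustrative:

```
# Instance profile excerpt (Gateway SP2 trial values)
rdisp/wp_no_dia        = 20
rdisp/tm_max_no        = 200
rdisp/max_comm_entries = 500
```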
The test now takes 17 seconds, only 7 seconds slower than Jersey, while still using the slower sdata. As soon as I can run the test with a Gateway version that uses odata I will post the results.
Downloaded Gateway SP3, which supports odata. The parameters are the same as in the other test runs. I don't know why, but there are always 4 samples that return an ABAP error; the rest works fine. I simply decided to ignore the 4 failing requests, as they hit a timeout and distort the result (a big jump in the time needed to run the test: from ~15 seconds to > 1 minute). The best result I got was 14 seconds with odata:
Switching from sdata to odata gives a performance improvement of 3 seconds.
I guess the options to further improve the performance are:
- separate Gateway from ABAP (remote)
- deep dive into the ICM performance parameters
- more CPU, RAM => spend more money
While I'm trying to figure out how to get my hands on a Gateway SP4 to test the impact of Atom vs. JSON, I decided in the meantime to improve the performance of my Jersey Java application. After all, the 4 seconds posted by David are impressive. After a little tuning of my application:
CPU usage suggests that a better result should be possible (after all, 2 VMs are running in the background):
The test took 1,108 seconds, not bad for Java. But the trick I applied may not be usable in all scenarios 🙂