
Performance of SAP Gateway

I decided to test the performance of SAP Gateway with JMeter. As the results are quite useless if not compared to a similar solution, I decided to use Jersey for the comparison. Why Jersey? SAP Gateway allows you to access SAP data via OData (= REST + Atom) and Jersey allows you to access data via REST + XML/JSON. As long as you do not really depend on OData, both offer the same functionality: access to data using REST.

I suspected that Gateway would outperform Jersey, as for the Jersey test I had to run 2 virtual machines, and Jersey also runs inside NetWeaver AS Java: CAF is used to connect to the backend (JCo) and Jersey consumes CAF using EJB. It was not a scientific test, as I used my local virtual machines, but the tendency is of course valid. For the Gateway VM I used the trial version available for download here at SCN. The Jersey setup is more complicated, as Jersey is a Java application that needs Java 6. I used a virtual machine of NetWeaver CE 7.2 (that download is no longer available at SCN) and connected it to the same Gateway server.

Test run

The test was performed using JMeter and consisted of getting the list of all available flights and then calling the details of every flight. As there are 400 flights in the backend, this sums up to 401 requests. The actual data size transmitted cannot be compared directly, as Gateway transmits OData with Atom, which contains far more information than Jersey with JSON. The average size of the response for the detail flight information is 2,560 bytes with Gateway. This sums up to slightly more than 1 MB of data transmitted for the test.

/wp-content/uploads/2012/06/gwperf1_114417.png

File size of the detail flight requests returned from SAP Gateway.


For Gateway the URLs were basically the same as for the publicly available Gateway demo system:

sap/opu/sdata/iwfnd/RMTSAMPLEFLIGHT/FlightCollection(carrid='AA',connid='0017',fldate='20110601') to get the details of a flight. The HTTP content returned from Gateway looks like:

/wp-content/uploads/2012/06/gwperf2_114418.jpg

OData Atom feed content from SAP Gateway.
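
For readers who want to reproduce a single detail call outside of JMeter, a plain Java request can look like the following. This is a minimal sketch: host, port, user and password are assumptions for the trial VM; only the URL path is taken from above.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.xml.bind.DatatypeConverter;

public class GatewayDetailCall {
    public static void main(String[] args) throws Exception {
        // Host, port and credentials are assumptions for a local Gateway trial system
        String url = "http://gateway.local:8000/sap/opu/sdata/iwfnd/RMTSAMPLEFLIGHT"
                   + "/FlightCollection(carrid='AA',connid='0017',fldate='20110601')";
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        String auth = DatatypeConverter.printBase64Binary("USER:password".getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + auth);

        // Read the Atom response; its size is what the ~2,560 bytes average above refers to
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        int chars = 0;
        String line;
        while ((line = in.readLine()) != null) {
            chars += line.length() + 1;
        }
        in.close();
        System.out.println("HTTP " + conn.getResponseCode() + ", ~" + chars + " characters");
    }
}
```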


For Jersey I developed a J2EE application that takes the parameters of a single flight in the URL: demo.sap.com~test~bean~rest~web/tobias/getFlight/carrid/AA/connid/0017/fldate/20110601. The response for a flight as JSON:

/wp-content/uploads/2012/06/gwperf3_114479.jpg

JSON response from Jersey for the BAPI call.
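
The Jersey side of such a URL pattern boils down to a JAX-RS resource roughly like the one below. This is an illustrative sketch, not the actual application code: FlightService and FlightDetail are made-up stand-ins for the CAF service (consumed via EJB, with JCo behind it) and the BAPI result structure.

```java
import javax.ejb.EJB;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.xml.bind.annotation.XmlRootElement;

@Path("/tobias")
public class FlightResource {

    // Placeholder for the CAF service consumed via EJB; behind it, JCo calls the ABAP backend
    @EJB
    private FlightService flightService;

    @GET
    @Path("/getFlight/carrid/{carrid}/connid/{connid}/fldate/{fldate}")
    @Produces(MediaType.APPLICATION_JSON)
    public FlightDetail getFlight(@PathParam("carrid") String carrid,
                                  @PathParam("connid") String connid,
                                  @PathParam("fldate") String fldate) {
        // Jersey serializes the returned bean to JSON (e.g. via its JAXB/JSON support)
        return flightService.getFlightDetail(carrid, connid, fldate);
    }

    // Minimal stand-ins so the sketch is self-contained; the real types come from CAF
    public interface FlightService {
        FlightDetail getFlightDetail(String carrid, String connid, String fldate);
    }

    @XmlRootElement
    public static class FlightDetail {
        public String carrid, connid, fldate;
        public int price;
    }
}
```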


The file size for all flights is 111,745 bytes and for the details of a flight it is around 846 bytes. This sums up to less than half a MB for all the data with Jersey and JSON.

/wp-content/uploads/2012/06/gwperf4_114480.jpg

File size of the detail flight requests returned from Jersey.

The payload comparison shows that the Atom format is simply too large compared to JSON. Even if the actual JSON response for OData would be larger than in my Jersey application, it would still be significantly smaller than Atom. Especially when you have a mobile application that uses the OData SDK for Sybase, receiving 400 flight updates throughout a day means: you have to transmit and receive 1 MB per day.
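
For the record, with the numbers above: 400 detail responses × ~2,560 bytes ≈ 1.02 MB of Atom payload for the detail calls alone, versus 111,745 + 400 × ~846 bytes ≈ 0.45 MB for the complete Jersey/JSON run.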

A test run consisted of setting a number of concurrent threads and then going through the 401 requests. As an example, for Gateway the result reported by JMeter for 20 threads was:

/wp-content/uploads/2012/06/gwperf5_114481.jpg

The same test run for Jersey:

/wp-content/uploads/2012/06/gwperf6_114482.jpg
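
The same load pattern can also be approximated without JMeter by a small Java driver like the one below. This is a rough sketch, not my actual JMeter test plan: the base URL is an assumption, authentication is omitted, and a single detail key is simply repeated 400 times instead of being read from the list response.

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LoadDriver {

    // Base URL is an assumption for a local Gateway trial system
    static final String BASE = "http://gateway.local:8000/sap/opu/sdata/iwfnd/RMTSAMPLEFLIGHT";

    public static void main(String[] args) throws Exception {
        int threads = 20;                               // 1, 10 or 20, as in the test runs
        List<String> urls = buildUrls();                // 1 list call + 400 detail calls = 401

        List<Callable<Integer>> tasks = new ArrayList<Callable<Integer>>();
        for (final String url : urls) {
            tasks.add(new Callable<Integer>() {
                public Integer call() throws Exception {
                    return fetch(url);
                }
            });
        }

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.currentTimeMillis();
        pool.invokeAll(tasks);                          // blocks until all 401 requests are done
        pool.shutdown();
        System.out.println("Total time: " + (System.currentTimeMillis() - start) + " ms");
    }

    static List<String> buildUrls() {
        List<String> urls = new ArrayList<String>();
        urls.add(BASE + "/FlightCollection");
        for (int i = 0; i < 400; i++) {
            // In the real test every flight gets its own key, taken from the list response
            urls.add(BASE + "/FlightCollection(carrid='AA',connid='0017',fldate='20110601')");
        }
        return urls;
    }

    // Plain HTTP GET that drains the response and returns the number of bytes read
    static int fetch(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        InputStream in = conn.getInputStream();
        byte[] buf = new byte[8192];
        int total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;
        }
        in.close();
        return total;
    }
}
```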

The result

The actual data was gathered by running the test with 1, 10 and 20 threads and measuring the time it took to complete the test. Additionally I added the average number of server hits.

/wp-content/uploads/2012/06/gwperf7_114483.jpg

The data shows that my Jersey application outperforms Gateway from the beginning and benefits from an increased number of concurrent requests, while Gateway stagnates. To me, SAP Gateway performs unexpectedly badly against Jersey. While Gateway does not improve the total time needed when adding more threads, Jersey does what you expect: the total time needed to complete all 401 requests gets lower, from 35 seconds down to 10.

The question I have now is: why? And keep in mind that to get Jersey running I had to start another virtual machine. And NetWeaver AS Java. And CAF. So: what is happening here? Why is Gateway so slow? Is it because of OData? The Gateway architecture? The implementation? The trial version? The configuration that comes with the trial?

Update: applied some performance tuning tips

The number of dialog work processes is set to 20 (1 DIA is blocked by me being logged in to monitor the system, which makes 19 available):

/wp-content/uploads/2012/06/screenshot_5_115057.jpg

SM50:

/wp-content/uploads/2012/06/screenshot_6_115058.jpg

ICM status after I executed the JMeter test 2 times to “fill the cache”:

/wp-content/uploads/2012/06/screenshot_1_115055.jpg

Running the test with 20 concurrent threads in JMeter:

/wp-content/uploads/2012/06/screenshot_3_115056.jpg

Worker processes are used:

/wp-content/uploads/2012/06/screenshot_8_115059.jpg

Result in jMeter:

/wp-content/uploads/2012/06/screenshot_9_115060.jpg

I ran the test 5 times; 25 seconds was the best result. That's ~7 seconds faster than before and still ~14 seconds slower than with Jersey.

Update 2

Applied more tuning parameters. Here are the values of the Gateway SP2 trial I’m using. Still downloading the SP3 trial to test the impact of sdata vs odata.

  • icm/max_threads: 250
  • icm/max_conn: 500
  • rdisp/wp_no_dia: 30
  • rdisp/tm_max_no: 200
  • rdisp/max_comm_entries: 500
  • CPUs: 2
  • Memory: 2 GB

/wp-content/uploads/2012/06/screenshot_3_115056.jpg

The test now takes 17 seconds, only 7 seconds slower than Jersey, still using the slower sdata. As soon as I can run the test with a Gateway version that uses odata, I will post the results.

Update 3

Downloaded Gateway SP3, which supports odata. The parameters are the same as in the other test runs. I don't know why, but there are always 4 samples that give an ABAP error; the rest works fine. I simply decided to ignore the 4 error requests, as they hit a timeout and distort the result (a big jump in the time needed to run the test: from ~15 seconds to > 1 minute). The best result I got was 14 seconds with odata:

/sap/opu/odata/iwfnd/RMTSAMPLEFLIGHT/FlightCollection

/sap/opu/odata/iwfnd/RMTSAMPLEFLIGHT/FlightCollection(carrid='AA',connid='0017',fldate=datetime'2012-01-11T00%3A00%3A00')

Switching from sdata to odata gives a performance improvement of 3 seconds.

/wp-content/uploads/2012/06/14seconds_116323.jpg
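
A small side note on the odata URL above: the %3A sequences are simply the URL-encoded colons of the datetime literal. A quick sketch of how such a key can be built (the entity set and key names are taken from the URL above, the host is an assumption):

```java
import java.net.URLEncoder;

public class ODataKey {
    public static void main(String[] args) throws Exception {
        // ':' becomes %3A, while '-' and 'T' stay as they are
        String fldate = URLEncoder.encode("2012-01-11T00:00:00", "UTF-8");
        String url = "http://gateway.local:8000/sap/opu/odata/iwfnd/RMTSAMPLEFLIGHT"
                   + "/FlightCollection(carrid='AA',connid='0017',fldate=datetime'" + fldate + "')";
        System.out.println(url);  // ...fldate=datetime'2012-01-11T00%3A00%3A00')
    }
}
```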

I guess the options to further improve the performance are:

  • separate Gateway from ABAP (remote deployment)
  • deep dive into the ICM performance parameters
  • more CPU and RAM => spend more money

Update 4


While I'm trying to figure out how to get my hands on a Gateway SP4 system to test the impact of Atom vs. JSON, I decided in the meantime to improve the performance of my Jersey Java application. After all, the 4 seconds posted by David are impressive. After a little tuning of my application:

/wp-content/uploads/2012/06/jersey1second_116694.jpg

CPU usage suggests that a better result should be possible (after all, 2 VMs running in the background):

jerseyCPUusage.jpg

The test took 1.108 seconds, not bad for Java. But the trick I applied maybe isn't usable in all scenarios 🙂
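
Without giving the trick away: a typical example of this kind of shortcut is caching the backend result on the Java side, so repeated detail calls never reach JCo at all. The sketch below is purely illustrative, not necessarily what was used here; the cache has no invalidation, which is exactly the kind of limitation that makes such tricks unusable in some scenarios.

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: a naive in-memory cache in front of the backend call
public class CachedFlightService {

    private final ConcurrentHashMap<String, Object> cache =
            new ConcurrentHashMap<String, Object>();

    public Object getFlightDetail(String carrid, String connid, String fldate) {
        String key = carrid + "/" + connid + "/" + fldate;
        Object cached = cache.get(key);
        if (cached != null) {
            return cached;                     // served from memory, no JCo round trip
        }
        Object fresh = callBackend(carrid, connid, fldate);
        cache.putIfAbsent(key, fresh);         // no expiry/invalidation in this sketch
        return fresh;
    }

    private Object callBackend(String carrid, String connid, String fldate) {
        // Placeholder for the real CAF/JCo call to the BAPI
        return "flight " + carrid + " " + connid + " " + fldate;
    }
}
```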

13 Comments
  • I’m very curious whether it’s the payload that makes the difference. Gateway performs at ~half the speed of Jersey, but also has twice as much data to process.

    What would the results look like if you switched Jersey to XML and tried to add some dummy elements, to make it as chatty as Gateway/OData?

    Have you experimented with odata4j, by the way? It would be nice to add it to your benchmarks, I think…

    • The Jersey JSON is already quite chatty; I don't leave anything out of the BAPI response. I thought about the payload after looking at the 1-thread case, but the OData response time is more or less the same, even with 20 concurrent threads. Somewhere there has to be a bottleneck.

      In other local tests (not with Gateway, but with Java, Portal) the actual payload didn't really affect the result, as it's a local connection. The extra 0.5 MB that Gateway transmits can hardly increase the time that much (10 s vs. 30 s). What normally increases the time is the total number of connections, but in both cases they are equal: 401.

  • Hi Tobias,

    SP4 for SAP NetWeaver Gateway, which was released a few weeks back, supports JSON. Using JSON makes the response size about 2.5x smaller in comparison to Atom/XML. This definitely has a positive impact on performance. It would be interesting to see the performance numbers in your test scenario when using JSON on the GW side.

    Cheers,
    Jeff

      • No, I didn't notice you were using the trial version… I think that is only available up to SP3. SP4 is available on the Service Marketplace.

        • I can wait 🙂

          Any idea why firing 1, 10 or 20 threads isn't improving the overall time needed? Is the ICF single-threaded and blocking?

          • It all depends on the configuration of the AS ABAP. I don't have that trial version installed, but I am sure it is a very minimal configuration and not configured for performance. I wouldn't put too much credence in the performance numbers gathered from it. The ICF is built on top of the Internet Communication Manager (ICM), which is the ABAP web server… it has multiple threads and in turn hands off processing to one of the ABAP dispatcher processes where the real work happens. Take a look at the ICM (tcode SMICM) and see how many threads are configured (I think 10 is the default)… more likely though there aren't enough dispatcher processes available to handle all the incoming requests… dispatcher processes are not single threaded, so you need a bunch of them running… the reason for dispatcher processes not being single threaded is that we don't want to crash the server if one user/program causes a problem. Anyway, you can see how many dispatcher processes there are in tcode SM50; there you will see processes marked as “DIA”, which are dispatcher processes.

          • Hi Jeff,

            I changed the configuration (20 DIA) and updated the blog with the new results. Is the configuration of the trial now more realistic? Any more hints / ideas?

          • Hi Tobias,

            I'm no expert on that image… there are lots of different ICM settings that can be changed (shared memory segments and so on), but I have no idea how that image is configured in this regard. Sorry about that. I can say that I have seen performance numbers from SAP tests and they are very good. Nothing I can share here though, other than saying that. I'll forward this thread on to some of the developers and see if they have any input.

            Cheers,
            Jeff

  • Hello Tobias,

    I’m responsible for Gateway performance activities.

    I found 3 main topics in the post where you found Gateway performance is worse than Jersey:

    1. Single user results (performance of a single thread – 34 sec vs. 38 sec total time for executing 401 requests).
    2. Data size transmitted (1/2 MB vs. 1 MB).
    3. Gateway scalability with the number of concurrent threads/requests.

    Please find my comments in regard to each of the topics above:

    1. As mentioned by Jeff above, Gateway supports the JSON format starting with 2.0 SP4. The response time of a single operation using JSON is better, and therefore it must be compared JSON to JSON and not JSON to Atom.

    In addition, you executed the Gateway service in Compatibility mode and not in the Standard mode recommended by Gateway, which is the default for all new applications. Standard mode uses an updated OData library developed in SAP with much better performance and extended features.

    Compatibility mode: sap/opu/sdata/iwfnd/RMTSAMPLEFLIGHT/FlightCollection(carrid='AA',connid='0017',fldate='20110601')

    Standard mode: sap/opu/odata/iwfnd/RMTSAMPLEFLIGHT/FlightCollection(carrid='AA',connid='0017',fldate='20110601')

    Standard mode is available starting with Gateway 2.0 SP3.

    So, by executing the GW service with the JSON format and Standard mode, the performance results will be better.

    2. The data size transmitted by Gateway, once it is using JSON, is the same as with Jersey.

    3. I tested the same use case (same service) on the Gateway performance landscape with 2 options: (a) local Gateway (on the same machine as the ERP) and (b) remote Gateway (a 2-machine configuration, Gateway and backend).

    The results on the Gateway performance landscape indicate that the solution is scalable (the Gateway remote implementation results are shown).

    The actual data was gathered by running the test with 1, 5, 10 and 20 threads and measuring the time it took to complete the 401 operations. Additionally I added the total transactions per second.

    http://img560.imageshack.us/img560/8633/totaltimetranspersecond.png

      

    Please find the test results for the 20-thread execution, where I merged the “total transactions per second”, the running users (threads), and the average response times for 1 query call returning 400 entries and 400 additional calls to read the flight details.

    http://img823.imageshack.us/img823/230/totaltranspersecondrunn.png

    The following settings can lead to a bottleneck on your landscape when executing GW flows:

    1. Number of machine cores.
    2. Number of ICM connections. This is controlled with the icm/max_conn and icm/max_threads parameters. The value of these parameters (aggregated) must be higher than the number of concurrent processes that access the ICM.
    3. Number of DIA processes. This is controlled with the rdisp/wp_no_dia parameter. The value of this parameter must be higher than the number of concurrent processes that access the dispatcher.
    4. Maximum number of users per instance. This is controlled with the rdisp/tm_max_no parameter.
    5. Maximum number of communication entries on an application server. The rdisp/max_comm_entries parameter allows you to control the number of RFC/CPIC connections on an application server. Every RFC or CPIC communication with a partner program requires an entry. If the initiator and recipient of an RFC/CPIC program are running on the same application server, two entries per communication are required.
    6. Memory. Please ensure that there is no swapping (transaction ST02).

    You can also download the Gateway 2.0 SP4 sizing guide from the Service Marketplace to get more details.

    Thanks,

    David

    • Hi David,

      my Gateway trial is without odata, only sdata. As soon as I have downloaded and installed the SP3 Gateway trial available here on SCN, I will run the test again, applying the performance tips you gave.

      The data you shared is much closer to what I expected from Gateway's performance.

      Atom vs. JSON: as stated, it should not have a big impact on the overall performance. The problem here is for the early adopters that are using Gateway with Atom and not JSON, as the data transmitted can be significantly higher than with JSON, and they will have to patch their system to SP4 and adjust their code to use JSON.

  • Hi all

    Interesting blog.

    Just started to review the new Sizing Guideline (document version: either 1.0 as on the front page or 1.6(?) as per page 3, dated 2012-12-11) for SAP NetWeaver Gateway 2.0 SP06.

    More to go soon …

    Adi