Design Studio Tips and Tricks: Measuring Performance
Before you start reading this…
This blog is about measuring performance. Measuring is mostly done with the intent to improve performance, but specific tips on how to improve it are not the main focus of this blog. However, there is a great blog with best practices on Design Studio performance. Check this out:
Design Studio Performance Best-Practices
Why should you use Design Studio’s profiling capabilities?
After you have applied your performance optimizations, you should of course measure whether you have really improved the performance of your Design Studio application. Or even better: before you tune the performance, you should first measure where the runtime of your application is spent. In most cases measuring performance can simply be done by looking at the second hand of your watch. But with Design Studio there’s a way that gives you more insight.
To measure the runtime of Design Studio applications the URL parameter “&PROFILING=X” can be added to the application’s startup URL.
Start your application via the “Execute Locally” action, then go to the started browser’s URL field and append “&PROFILING=X”:
This will re-start the application and you will see the “Statistics” window on top of your application’s UI:
When you are running your apps on the BI Platform (BIP) or on NetWeaver you can start the profiling via the following three steps:
First select the “Send to Mobile Device” menu item:
Then click on the “Copy URL to clipboard” button:
Now the application URL is in the clipboard. Paste this URL into your browser’s URL field, but before hitting the return key, append “&PROFILING=X”.
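As a small illustration of this rule (use “&” after an existing query string, “?” if there is none), here is a JavaScript sketch; the helper name is my own and not part of any SAP API:

```javascript
// Append the PROFILING flag to a Design Studio application URL.
// Uses "?" when the URL has no query string yet, "&" otherwise.
function addProfilingFlag(url) {
  if (url.indexOf('PROFILING=X') !== -1) {
    return url; // flag already present, nothing to do
  }
  var separator = url.indexOf('?') === -1 ? '?' : '&';
  return url + separator + 'PROFILING=X';
}
```

Pasting the returned URL into the browser restarts the application with the Statistics window on top.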
What can you do with the Statistics window?
First let’s look at the upper part of the window.
The most important information is on the first tab “Java Statistics”.
The Design Studio runtime is implemented in Java, running either on BIP or NetWeaver, so the most performance-critical algorithms run there. The “Java Statistics Raw” tab contains the same information as “Java Statistics”, but in a more “technically readable” form (JSON). This tab is typically used when sending information to SAP support.
The lower part consists of four buttons:
The first button downloads the “Java Statistics” to a CSV file that can be opened in Excel. This is cool for Excel gurus. I personally prefer the next option, which downloads the same information into a text file.
The third button is the “Refresh” button. It is needed because, due to timing issues, the Statistics window may have been rendered before all performance data was available 😉 This means not all of the performance information is visible. Pressing the “Refresh” button ensures that you see the very latest data. Note that this affects only the information shown directly in the “Statistics” window. If you press one of the “Download” buttons, all relevant data is contained in the downloaded files.
The “Close” button (guess what) closes the Statistics window. However, you don’t need to close the window to continue working with the application. By dragging the “title bar” you can simply move it to a place where it does not bother you too much. The window is modeless, so you can still click on the application area. You can even resize the window by dragging its lower right corner. If you have closed the window, it will re-appear after you have triggered a server roundtrip.
What can you do with the statistics information?
After we have downloaded the relevant information as a text file, we get a lot of strange-looking information:
Step 0: UID: null
Runtime: 1360 ms, Java: 332 ms, remote: 1028 ms
Java events: 1360 ms
  1360 ms: Application step ZEN (12500)
    1360/16 ms: Process request ZMK_TEST_02 (12600)
      1032 ms: Process page command SET_TEMPLATE (12608)
        1032/50 ms: Initialize item main_template (14100)
          0 ms: Create system service TEMPLATE for local [MASTER SYSTEM]
          967 ms: Process page command CREATE_DATA_PROVIDER (12608)
            0 ms: Create service PAGE_OBJECT
            967 ms: Initialize data provider DS_1 (13100)
              967 ms: Create system service BICS_QUERY_VIEW for Q93 [PUBLIC]
                47 ms: Get connection
                78 ms: Create system service MASTER_SYSTEM_SETTINGS for Q93 [PUBLIC]
                795 ms: Setup initial state (13001)
                  795 ms: Create system service BICS_PROVIDER_QUERY_VIEW for Q93 [PUBLIC]
                    249 ms: RFC BICS_PROV_OPEN (10000)
                    47 ms: Create system service BICS_DATA_AREA for Q93 [PUBLIC]
                    499 ms: RFC BICS_PROV_GET_INITIAL_STATE (10000)
                      499 ms: BICS_PROV_GET_INITIAL_STATE: (-2)
At the top of each “Step” we can already see some useful information:
Step 0: UID: null
Runtime: 1360 ms, Java: 332 ms, remote: 1028 ms
This tells us that “Step 0” (i.e. the application startup) took 1360 ms, but only 332 ms were spent in Design Studio’s Java code. The majority of the time (1028 ms) was spent “remote”, i.e. waiting for the remote systems hosting the data sources. These could be BW systems, HANA systems or BIP Universes.
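When comparing many steps, it can help to turn this header line into numbers. Here is a small sketch (my own helper, not part of the profiling tooling) that parses such a line and computes the remote share:

```javascript
// Parse a "Runtime: ..." header line from the statistics text and
// compute which share of the step was spent waiting on remote systems.
function parseRuntimeLine(line) {
  var m = line.match(/Runtime: (\d+) ms, Java: (\d+) ms, remote: (\d+) ms/);
  if (!m) return null; // not a step header line
  var total = parseInt(m[1], 10);
  var remote = parseInt(m[3], 10);
  return {
    totalMs: total,
    javaMs: parseInt(m[2], 10),
    remoteMs: remote,
    remoteShare: Math.round((remote / total) * 100) // percent
  };
}
```

For the example above this yields a remote share of roughly 76%, which already tells you to look at the data source side first.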
After the top-level runtime information we get a hierarchical view of the execution of a step. Each line starts with the time (in milliseconds) that was consumed. Some of these numbers are followed by a second number separated by a slash:
1032/50 ms: Initialize item main_template (14100)
The second number (here: 50) means that out of the overall 1032 milliseconds, 50 ms cannot be assigned to any of the sub-steps of this step. I.e. if you sum up the milliseconds of all sub-steps you will get 982 (1032 – 50) milliseconds; the rest of the time is consumed in the “flow” between these steps. These times after the “/” should typically be quite low (less than 100 ms). If you find times over 100 ms, open a bug ticket for the Design Studio team and tell them that such a significant amount of time deserves its own entry in the statistics.
Note that the timer granularity on Windows operating systems is 15.6 milliseconds. So an entry with “0 ms” does not necessarily mean “almost no time” but just “below 15.6 ms”. This also explains why the small entries often have values like “16 ms”, “31 ms” or “47 ms”.
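The “total/self” convention from the lines above can be captured in a tiny parser. This is a sketch of my own, assuming the line format shown in the examples:

```javascript
// Parse one hierarchy line such as
//   "1032/50 ms: Initialize item main_template (14100)".
// The number after the slash is "self time" that cannot be
// attributed to any sub-step; without a slash, self time is 0.
function parseStatLine(line) {
  var m = line.trim().match(/^(\d+)(?:\/(\d+))? ms/);
  if (!m) return null;
  var total = parseInt(m[1], 10);
  var self = m[2] ? parseInt(m[2], 10) : 0;
  return { totalMs: total, selfMs: self, subStepsMs: total - self };
}
```

A self time above ~100 ms is the “open a bug ticket” situation described above.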
What should I look for in this data?
In most cases bad performance is caused by accessing data sources. Accessing a data source requires accessing the “remote” system that contains the analytical data. The statistics data varies between the remote-system types (BW, HANA, Universe).
As a first example we take a look at BW data sources. BW systems are accessed via the “Business Intelligence Consumer Services” (BICS). These services access the BW system using “Remote Function Calls” (RFC). RFC calls are typically the items that take a significant amount of time.
In this example one of the long-running calls is “BICS_PROV_OPEN”. By going up the hierarchy we can see that this was triggered by “Initialize data provider DS_1”:
842 ms: Initialize data provider DS_1 (13100)
  842/1 ms: Create system service BICS_QUERY_VIEW for Q93 [PUBLIC]
    36 ms: Get connection
    56/55 ms: Create system service MASTER_SYSTEM_SETTINGS for Q93 [PUBLIC]
    1 ms: Create system service SYSTEM_CAPABILITIES for Q93 [PUBLIC]
    699 ms: Setup initial state (13001)
      699/3 ms: Create system service BICS_PROVIDER_QUERY_VIEW for Q93 [PUBLIC]
        550 ms: RFC BICS_PROV_OPEN (10000)
If you see several of these “Initialize” entries in the statistics, you should check whether you really need them within this step. Especially in the startup step, too many data sources are often initialized although they are only needed in later steps (or not at all, if the user does not click through all tabs). Check the blog mentioned at the beginning for ways to change this.
Another important call is “BICS_PROV_GET_RESULT_SET”.
247/5 ms, 80: Get result set (13004)
  240/3 ms, 80: Get provider result set (13040)
    237 ms: RFC BICS_PROV_GET_RESULT_SET (10000)
The “Get result set” entry indicates that the complete result of a data source is fetched from the remote system. If you see these entries you should check whether the number of retrieved result sets matches the UI that is displayed. E.g. if a user interaction brings up a crosstab and a chart, there should be at most two “Get result set” entries. Even better, there should be only one, because a crosstab and a chart can share one data source in most cases.
Result sets are retrieved when they are needed for the first time:
- A data bound component (crosstab, chart, dimension filter, filter panel) renders information contained in the data source
- A script call (e.g. DS_1.getDataAsString) retrieves data from the data source
After that, the result set is re-used as long as the data source state does not change. Some scripting calls change that state and therefore trigger “Get result set” again.
If the number of “Get result set” entries is higher than you would expect, then you should check your script code for calls that change the data source state.
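To cross-check the number of fetches against the displayed UI, you can count the “Get result set” entries in the downloaded text file. A small sketch (my own helper, assuming the line texts shown above):

```javascript
// Count how many result sets were fetched according to a downloaded
// statistics text. Compare the result against the number of data-bound
// components (crosstabs, charts, ...) that are actually visible.
function countResultSetFetches(statisticsText) {
  var count = 0;
  statisticsText.split('\n').forEach(function (line) {
    // "Get provider result set" does not contain this exact phrase,
    // so only the top-level fetch entries are counted.
    if (line.indexOf('Get result set') !== -1) count++;
  });
  return count;
}
```

If the count is higher than the number of visible data-bound components, look for state-changing script calls as described above.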
On the other data source platforms (HANA, Universes) the “BICS_” RFC calls won’t appear; there will be other platform-specific entries instead, e.g. SQL calls for HANA:
15 ms: Get result set (13004)
  15 ms, 3: Get result set (13004)
    15 ms, 3: Get provider result set (13040)
      15 ms: Execute SQL: GET_CROSSTAB_DATA
This means that you should look for the “generic” texts like “Get result set” or “Initialize data provider” when you are analyzing non-BW data sources. If you are a BW junkie (like me) and have used BW performance analysis tools like “RSTT” (RS Trace Tool), you might like the “BICS” calls more, because you have seen them very often (probably too often) in RSTT 😉
I will post some more “Design Studio Tips and Tricks” in the future.
Stay tuned and please share your comments and feedback.
Great blogpost, looking forward to more performance-related articles!
Great post, I'll definitely be able to put this to good use!
Great post indeed! I have a few questions however which you might be able to help me with. First of all, I'm getting a different statistics UI when adding &PROFILING=X to the URL (when executing on the BIP). It's not giving me an option to download any data as CSV or text. My second question is where I can find more information regarding the data displayed. Is there any guide available that explains, for instance, the codes used?
When I did a quick analysis I found out that one of the data sources takes about 6400 ms to initialize, while 6300 ms of that are 'lost'. All other data sources initialize in about 200 ms. Can you give me a hint where to look? The HANA view I'm accessing is quite small and returns only 1 row of data when executed. In SAP HANA Studio I get the results back in 200 ms.
Thanks for your reply!
With kind regards,
Martijn van Foeken | Intenzz
I second that post. I'm on 1.2 and the detailed breakdown is not displayed (barely any rendering info, no BICS, no BW backend) and the CSV download is missing. I contacted one of the DS product owners in the Dublin AppHaus and he informed me that these features are not available in DS 1.2. I know for sure they're not available in my system. Can you confirm this is a planned feature not yet available in the released product?
some of the features were not visible in Design Studio 1.2 SP0 due to a bug. In version 1.2 SP1 everything should work as described. Note that both the Design Studio tool as well as the Design Studio AddOn for BIP must be on version 1.2 SP1.
The information that is available when running the application with "Execute on BI Platform" might be less informative due to security settings on the server. In that case execution via "Execute Locally" might be more useful.
sorry for the late reply.
The difference of the UI when executing on BIP might have two reasons:
1) The missing “download” buttons are most likely caused by the fact that the deployed version of the Design Studio AddOn for BIP is version 1.2 SP0 or lower. The buttons will appear as of version 1.2 SP1.
2) The reduced information in the statistics is probably caused by server security mechanisms that prevent some kinds of information from “leaking” outside the server.
Unfortunately, the information about the codes is currently not documented, and I have no information on whether it is planned to publish it. The intention of these codes is to help SAP support when these performance figures are provided to SAP.
Some more information about what is going on is available via the logging infrastructure. In Design Studio select “Tools” -> “Preferences”. On the left side select “Application Design -> Log Level” and choose “Debugging Information”:
Then run the application and open the error log view by choosing the menu “View -> Error Log”. In the error log view press the button “Open Log”:
In that log file you can search for entries containing “[BICS HANA]”. These entries contain specific information about calls made from Design Studio to HANA. In most cases these entries contain the elapsed time and the specific SQL call made:
Maybe that log information helps you either directly or it might help the SAP support if you attach it to a SAP support ticket.
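The search for “[BICS HANA]” entries described above can also be done programmatically on a saved log file. A sketch (my own helper; the marker text comes from the reply above, the surrounding log format may differ between versions):

```javascript
// Filter a saved Design Studio log file for the HANA-specific
// entries, which contain elapsed times and the emitted SQL calls.
function filterHanaEntries(logText) {
  return logText.split('\n').filter(function (line) {
    return line.indexOf('[BICS HANA]') !== -1;
  });
}
```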
Thanks for your reply! I'm indeed not running 1.2 SP1 but SP0. We're currently not able to upgrade in the project as we are pretty much in the go-live phase. Will definitely try it out once I have the opportunity. Hope something will be documented!
With kind regards,
Martijn van Foeken | Intenzz
Great post. Very useful.
I'm following your steps, and the profiling popup appears, but disappears when the DS application is fully loaded. Where can I retrieve the info?
Thanks for this great post !!
I'd highly appreciate it if you could bring some clarity to a few thoughts I have.
1. In the below screenshot, Step 0 takes 3431 ms for running the Java code within the application, whereas the runtimes of Steps 1, 2, 3, 4, ... are significantly lower. Why is that?
I'm assuming Step 0 is where it actually runs the Java code to initialize and set up the components so the application can build its framework (this includes the code in "On Startup" and the DS load with a counter for background processing – basically the entire script).
2. I ran the same application in "local mode" with no changes to my file; the results do not match the above screenshot.
Not sure what caused this change – could the ping to localhost vs. the server possibly make this difference? Just a thought 😕
Once again thanks for the post
the “Step 0” is usually the initialization phase, including the “On Startup” script. If the application you started includes a prompt (for entering variable values of queries), then there are several steps before that, showing the user interaction in the prompt screen.
If you have background processing involved, each of the background processing steps (i.e. each counter value) gets its own entry in the statistics dialog.
Different times between executions are normal. They might come from different load on the computer (server or client), network latency, or different states of server-side caches (especially BW caches). For further analysis you might want to take a look at what happens inside these steps by expanding the tree nodes on the screen.
Different times between local execution and server-side execution might be caused by the different network topology in these environments. Assuming that you use the “BI Platform” server to access data on a BW system, the network looks like this:
“Local execution”: The web browser connects to a locally running web server inside the Design Studio tool on the local computer. This local web server connects to the BW system to retrieve the data.
“Execute on BI Platform”: The web browser connects to the web server running on the BI Platform server. The BI Platform connects to the BW system to retrieve the data.
Depending on the network routes taken and on the performance of the BI platform (which is not involved in “Local execution”), this may cause different times.
For reliable numbers, performance tests should always be executed several times. Then averages can be taken and outliers can be eliminated.
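One simple way to do what the reply above suggests (average several runs, eliminate outliers) is a trimmed average; this sketch drops the best and worst measurement and is my own illustration, not a prescribed method:

```javascript
// Average several measured runtimes (in ms) after dropping the
// highest and lowest value, a simple way to eliminate outliers.
function trimmedAverage(timesMs) {
  if (timesMs.length < 3) {
    throw new Error('need at least 3 measurements');
  }
  var sorted = timesMs.slice().sort(function (a, b) { return a - b; });
  var trimmed = sorted.slice(1, sorted.length - 1);
  var sum = trimmed.reduce(function (acc, t) { return acc + t; }, 0);
  return sum / trimmed.length;
}
```

With, say, one cold-cache run of 500 ms among warm runs around 100 ms, the outlier no longer skews the average.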
Thanks a lot Martin for this very interesting stuff!
It works well for some of my simple dashboards but does not for more complex ones (I don't get the statistics window at all), do you have any idea why ?
Thanks a lot in advance
it is a known (but not yet fixed) issue that the profiling screen occasionally disappears.
The good thing is that there is always a workaround. The dialog never disappears completely; it always remains in the browser’s DOM and can be made visible again.
What helps in most cases is this approach:
Open the developer tools of your browser and search for the DOM element named “ZEN_PROFILING”. Then take a look at the DIV element in which “ZEN_PROFILING” is contained. You might see that this parent DIV is positioned at “-1000px, -1000px”. Use the dev tools to set these values to “1px, 1px” – then the profiling dialog will re-appear.
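The same workaround can be run as a console snippet. This is a sketch based on the description above; the element id “ZEN_PROFILING” comes from the reply, while the assumption that the parent DIV is parked off-screen via its `left`/`top` style values is mine:

```javascript
// Pull the hidden profiling dialog back into view. Run in the
// browser's developer console as: showProfilingDialog(document);
function showProfilingDialog(doc) {
  var el = doc.getElementById('ZEN_PROFILING');
  if (!el || !el.parentElement) return false; // dialog not in the DOM
  // Assumption: the parent DIV sits at -1000px/-1000px; move it back.
  el.parentElement.style.left = '1px';
  el.parentElement.style.top = '1px';
  return true;
}
```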
Do you have an example application where I could test the disappearing profiling dialog without the need to connect to your data sources? If so, I would appreciate it if you could send me this application via mail, so that I can test whether this problem is completely fixed in the next version of Design Studio.
Very useful post. Helped me a lot.
I was also wondering about the "Execute SQL" entries.
Are we able to see what SQL is executed? I'm having some performance issues with "Execute SQL: GET_MEMBERS".
Sometimes it is very quick (100+ ms) and sometimes it can go up to 6000+ ms. I'm connecting to HANA and I'm sure the HANA views don't take more than 3 seconds to return a result. Is the profiling affected by network latency? The results I got were totally random. Besides GET_MEMBERS, it also happens with GET_CROSSTAB_DATA.
Also, these fetches happen after .setFilter/.setFilterExt.
to see the SQL statements you can set the logging level to “Debugging Information”:
Afterwards, if you filter the log entries on “SQL:”, you can see the emitted SQL statements.
Thanks for this helpful post!
I was wondering if it is also possible to trace this using the CMC?
The idea is to let a user execute the Design Studio dashboard and later check in the trace where time was spent for this user.
Do you know if this is possible?
for measuring the E2E runtime of an application on the BI Platform you can use the E2E trace tool. Here you can find more info:
my version of DS is 1.6.2 and the breakdowns are now Total Time, Design Studio Time and Data Source Time. Does the data source time include BW and the BI Platform?
The "data source times" comprise the time spent in the data source server (BW, HANA), including the network time between the BI Platform and BW/HANA. This means large "data source times" indicate improvement potential on the BW/HANA side (queries, cubes, views) and sometimes (but very rarely) network issues between the BI Platform and the data source servers.
I noticed that if you use a stopwatch and actually time how long the initial load takes, there is additional time which is not incorporated into the tracker. Any reason for the discrepancy?
there is indeed a difference between the times provided by “PROFILING=X” and stopwatch time. The profiling trace contains only the time spent in the BIP server executing the Design Studio application on the server side. Some aspects that happen right before and after the server-side application execution are not part of this profiling information.
There are two main reasons why these times are not included. The first reason is that the times in “PROFILING=X” shall help the creator of the application to improve the application runtime by changing the application. However, if you have network issues, a complex authorization setup in your BIP, or simply a slow machine where your browser is running, then restructuring the application won’t help.
This leads to the second reason: if you want to tackle these times as well, there are many powerful and mature tools already on the market that can help you with that. The Design Studio profiling is meant to complement these other tools, because they have no way of looking into the details of a Design Studio application.
Complementing tools in this area are for example:
Thanks for this helpful post!
I have a performance problem in my cockpit: loading is slow. I would like to know what the whole loading process of my cockpit looks like, from the backend up to the presentation level, what the request flow to and from the server is, and which part you think takes the most time?
Thank you Martin!