
Design Studio 1.5 – The Performance Release

A particular focus of Design Studio 1.5 development was to improve performance.

This document describes many aspects of performance improvements, which are divided into:

  • The Free Lunch
    New performance improvements, which you get automatically when upgrading to Design Studio 1.5
  • The Exciting New Features
    New performance-improving capabilities, which you can profit from when creating new and modifying existing applications
  • Best Practices
    New and updated guidelines, which you should apply to your new and existing applications

  • Getting It Even Faster
    New tools, which you can use to analyze performance problems and make your application faster

Hint: You will find sections starting with "Before Design Studio 1.5", which point out how things behaved up to Design Studio version 1.4. The new behavior is introduced with "With Design Studio 1.5".

The Free Lunch

Improved Browser Caching Strategy

Before Design Studio 1.5 browser caches were tied to a specific server node. Thus, in system landscapes with multiple nodes the browser cache had to be built up several times – one time for each node. With Design Studio 1.5 the caching strategy has been improved to support a single cache, which is shared among several server nodes. This results in an improved startup time of the application, because the browser cache filled by the first node is reused when other nodes are accessed.

Before Design Studio 1.5 browser caches were invalidated when the server nodes were restarted, because a server update might have been installed during the downtime. With Design Studio 1.5 the system can distinguish between a simple server restart and a server update. The system invalidates the browser cache only if a server update has been applied. This results in an improved startup time of the application, because the browser cache does not need to be refilled after a simple server restart.

Reduced HTTP Requests

Before Design Studio 1.5 the associated JavaScript module for each component type used in the application was loaded in a separate HTTP request. With Design Studio 1.5 the most relevant JavaScript modules of the components were combined into a single module.

Before Design Studio 1.5 the authentication method at application startup required multiple HTTP requests. With Design Studio 1.5 the application startup sequence has been redesigned and the number of HTTP requests has been significantly reduced.

Both improvements result in an improved startup time of the application, especially in high-latency scenarios like WAN, because of the reduced number of HTTP requests.

Improved Startup Time of Application When Executed on Server

When an application is started on the server (that is, not with "Execute Locally"), the startup time is significantly influenced by the number of Java archives (JAR files) on the server, which need to be searched during application startup. The more JAR files are present, the longer the startup takes. With Design Studio 1.5 the lookup strategy has been improved. This results in a faster application startup.

The Exciting New Features

Parallel Query Execution

Before Design Studio 1.5 the data sources of an application were executed in sequence. With Design Studio 1.5 the application developer can decide which data sources are executed in parallel. Note that executing queries in parallel comes at a price. Executing data sources in parallel requires multiple sessions. Each session consumes resources on the server that stores the actual data, for example the BW system. This is the reason why queries are not executed in parallel by default.

With Design Studio 1.5 the application developer can define groups of data sources (“processing groups”). Each of these groups can be executed in parallel. Each group is associated with a session. Note that variables of separate sessions cannot be merged. If the application needs both parallel query execution and variable merging then there are new Design Studio script methods that can emulate variable merging behavior.
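As a minimal sketch of emulating variable merging across processing groups, assuming two data sources DS_1 and DS_2 in separate groups and a shared variable named MY_VAR (aliases and variable name are placeholders; the dedicated new script methods may offer more than this manual copy):

```
// DS_1 and DS_2 run in separate processing groups, hence separate
// sessions: their variables are not merged automatically.
// Copy the value of the shared variable manually to emulate merging.
DS_2.setVariableValueExt("MY_VAR", DS_1.getVariableValueExt("MY_VAR"));
```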

For example, before Design Studio 1.5 an application of 5 data sources, with each data source taking 1 second to initialize, took about 5 seconds for data source initialization. With Design Studio 1.5, by placing each data source into a separate group, one would expect the application to take only about 1 second for data source initialization, but of course you need to add some overhead for managing separate sessions and handling and synchronizing the parallel execution to arrive at the actual figure.

The most significant performance improvement of parallel query execution is during application startup. In addition, the performance of the application is improved whenever the result sets of data sources are retrieved, for example during rendering, after applying filters, after changing the drill-down, and so on.

Unmerge Variables / Unmerge Prompts

Before Design Studio 1.5, variables of data sources were always merged in a "variable container".

This has the following advantages:

  • Setting a value of a variable that is shared among data sources can be done with a single Design Studio script method call instead of setting the variable value for each data source separately.
  • The shared variable appears only once in the Prompts dialog instead of several times.

However, these advantages come at a price in performance:

  • Setting a variable value invalidates all data sources associated with the variable container. It even invalidates data sources that do not contain the variable that was set.
  • Initializing a data source with variables during the flow of application use invalidates all data sources associated with the variable container.

When data is retrieved from an invalidated data source, for example during rendering or during a Design Studio script method call, the data needs to be reloaded from the backend. This obviously reduces the performance of the application, and the impact grows with the number of data sources with variables.

With Design Studio 1.5 the application developer can disable the above variable merge behavior with the application property "Merge Prompts". One reason to do so is to deliberately set different values for variables of the same name in different data sources. Another is performance: with merging disabled, setting a variable value in one data source or initializing a data source does not affect (invalidate) other data sources.
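With "Merge Prompts" disabled, a script can give the same-named variable deliberately different values per data source. A hedged sketch (variable name and values are illustrative):

```
// "Merge Prompts" is off: each call affects (and invalidates) only
// the targeted data source, leaving the other data sources untouched.
DS_1.setVariableValueExt("0FISCYEAR", "2014");
DS_2.setVariableValueExt("0FISCYEAR", "2015");
```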

Best Practices

Using "setVariableValue" or "setVariableValueExt"

When setting several variable values with "setVariableValue" or "setVariableValueExt" Design Studio script methods, write these commands in one direct sequence, one after the other, without any other Design Studio script methods in between. This sequence is folded into a single backend call to submit variable values instead of multiple ones, improving application performance.
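For illustration, a sketch of such a foldable sequence (variable names and component are placeholders):

```
// Good: uninterrupted sequence - folded into a single backend call.
DS_1.setVariableValue("VAR_COUNTRY", "DE");
DS_1.setVariableValue("VAR_YEAR", "2015");
DS_1.setVariableValueExt("VAR_REGION", "EMEA");

// Bad: another script method between the assignments breaks the
// sequence and causes multiple backend calls.
DS_1.setVariableValue("VAR_COUNTRY", "DE");
TEXT_1.setText("Loading...");
DS_1.setVariableValue("VAR_YEAR", "2015");
```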

Using "setVariableValue" or "setVariableValueExt" at Application Startup

When setting variable values at application startup, prefer to set the variable values in the “On Variable Initialization” script instead of the “On Startup” script.

During the "On Startup" script the variable values have already been initialized using their default values or with values entered in the Prompts dialog. Setting a new variable value at that time will invalidate the associated data source, resulting in reloading the data. Setting the values in the "On Variable Initialization" script, which is executed before the "On Startup" script, avoids this issue.
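A short sketch of the recommended placement (variable name is a placeholder):

```
// In the "On Variable Initialization" script: runs before the
// variables are initialized, so the value is picked up during
// initialization and no data source is invalidated.
DS_1.setVariableValue("VAR_YEAR", "2015");

// The same call placed in "On Startup" would invalidate DS_1
// and force the data to be reloaded from the backend.
```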

Do Fewer Data Sources Always Improve Performance?

In general, it is true that the more data sources are initialized at startup, the slower the application starts. Note, however, that data sources that are intentionally not initialized at startup (with the data source property "Load in Script = true") do not decrease performance. For example, 10 data sources with "Load in Script = true" will not decrease the startup performance of the application. Even removing them does not improve performance.

Performance of Universe Access

While BW and HANA data sources provide direct multi-dimensional data access, Universe data sources are organized relationally. When accessing a Universe in a multi-dimensional manner, there is an overhead to transform the relational data into a multi-dimensional form. The overhead grows with the amount of data processed. In general, to achieve best performance, it is recommended to load frequently accessed relational data into a BW or HANA system.

Background Processing Is Not For Free

Keep in mind that background processing is not a replacement for parallel query execution. Background processing is intended to improve the perceived performance at the price of slightly decreased overall performance.

Every background processing step adds a performance overhead, so aim for a minimum number of background processing steps. For example, with an application of 10 data sources do not use 10 separate background processing steps – each one initializing a single data source. Better pick and group, for example, the 3 most important data sources in one background processing step and the 7 remaining ones in another background processing step.

In particular, do not consider using background processing if the application has no performance problem at all. For example, if you use background processing with multiple data sources just for the effect of loading animations, disappearing one by one, the price you pay for the effect in performance is not worth it. In such cases, consider dropping background processing altogether resulting in your data being loaded more quickly.

Getting It Even Faster

Crosstab and Pixel-Based Scrolling

By default the Crosstab component scrolls only in units of entire cells (as in Microsoft Excel). Additionally, to achieve best performance, only the data for the displayed cells is sent to the browser. However, you can activate "pixel-based scrolling" for a Crosstab component, mainly aimed at touch devices, which offers finer-grained, pixel-wise scrolling. To achieve this kind of scrolling, however, it is necessary to send the data of all available cells to the browser. Obviously this will work for a limited number of cells only.

With the growing number of cells this feature decreases performance in several ways:

  • The number of cells that need to be processed at the server increases.
  • The amount of data that needs to be sent over the network to the browser increases.
  • The amount of data that needs to be processed in the JavaScript engine of the browser increases. This is especially relevant for Web browsers of mobile devices which have lower hardware capabilities than desktop computers.
  • The number of cells in the browser’s Document Object Model (DOM) increases memory consumption and decreases rendering performance of the browser. This is especially relevant for Web browsers of mobile devices which have less memory than desktop computers.

In case of performance problems, it is recommended to check the following guidelines:

  • The number of data cells (“Row Limit” x “Column Limit”) for Crosstabs using “Pixel-Based Scrolling” on mobile devices (for example iPad) should not exceed 500.
  • The number of data cells (“Row Limit” x “Column Limit”) for Crosstabs using “Pixel-Based Scrolling” on desktop computers should not exceed 5000.

Improvements in the Profiling Dialog

Better Code Coverage, More Detailed Messages

With Design Studio 1.5 more of the code is covered and more detailed messages were added to the output of the Profiling dialog. For example, during Design Studio script execution, the script name is listed ("BUTTON_1.OnClick()") in the Profiling dialog.

Higher Measurement Accuracy

With Design Studio 1.5 the measurement accuracy has been increased – on the Windows platform, for example, from 16 milliseconds to 1 millisecond.

Remote Times for HANA Systems

With Design Studio 1.5 remote times for HANA systems are explicitly listed in the Profiling dialog. Before Design Studio 1.5 only the times for BW systems were listed.


New Tab “General Information”


With Design Studio 1.5 the tab “General Information” was added to the Profiling dialog. It provides the following information:

  • Timestamp of application execution
  • Name and description of application
  • Details about the data sources of the application. For each data source the data source alias, the name of the object backing the data source (for example the query name in BW systems or the view name in HANA systems), the processing group (when parallel query execution is used), the connection type, and the initialization state are listed.

Streamlined Content of “Download as Text”

With Design Studio 1.5 the content of “Download as Text” has been streamlined by omitting arcane information and all entries with an execution time of 1 millisecond and less. Before Design Studio 1.5 those entries cluttered up the content of “Download as Text” making it hard to spot relevant items with long execution times.

Display of Processing Group Execution


With Design Studio 1.5, for applications that use parallel query execution, the Profiling dialog displays the execution steps for each processing group separately. Whenever a parallel execution starts, "Execute Processing Groups asynchronously" is displayed in the Profiling dialog, followed by separate lines showing the execution of each processing group.

The separation into processing groups is also reflected in the downloaded content of the Profiling dialog.

What’s Next?

As you have seen from this long list of topics this Design Studio release truly was about performance.

As soon as Design Studio 1.5 hits the market (end of Q2/2015), you will have lots of new possibilities for improved application performance.

By the way, expect more posts on the heavy-weight Design Studio 1.5 performance topics “Parallel query execution” and “Unmerge variables”, including details, tutorials, and more.

Have fun with Design Studio 1.5!

Further Reading

Design Studio Performance Best-Practices

Design Studio: Performance Implications of Variables

Design Studio Tips and Tricks: Measuring Performance

Design Studio 1.5: View on Parallel Data Source Execution

    • The parallel query execution works for all types of data sources (BW, HANA, Universes); however, it is currently (in 1.5) limited to Design Studio applications running on BIP.

      • Is there any news on this topic? Will it be available in 1.6, or with 1.5 SP1 also for NW, or is the strategy to go native with HANA or to require using the BIP?

        • Hi Marc-Philipp,

          it will definitely not be in 1.5 SP1.

          The decision process for 1.6 features is not yet finalized, so I cannot make a final statement now. I'll keep this thread updated when there is more news on this.



  • Thanks for the great input!

    Looking forward to seeing some performance increases next week


    There's a little mistake I guess:

    Note, however, that data sources that are intentionally not initialized at startup (with the data source property "Load in Script = false") do not decrease performance

    This should be "Load in Script = true", or?

  • Hi Martin,


    you mention the application property "Merge Prompts". Is it possible to keep some merged, for example the ones that ought to be consistent, and unmerge others?


    best regards,



    • Hi Jeroen,




      the setting "Merge Prompts" applies to all data sources in the application.


      To make some of them behave merged, you can use scripting methods to apply variable values from one data source to the other:




      DS_2.setVariableValueExt("MY_VAR", DS_1.getVariableValueExt("MY_VAR"));






  • Hi Martin - This is exciting as we are having to strip down our DS applications to optimize end user experience. (ie. move charting and calculated views out to their own application from more of a crosstab app)


    If BW server sessions is a minimal concern is there a maximum threshold to the number of queries you can run on startup with query parallelization? Is there any recommended practices to right size the application (meaning - quantity of queries on startup vs background processing vs on script).


    I ask as I have an Application with 75 views (x5 Application Varieties of these views)




    • Hi Jim,


      there are no general numbers that can be given. There might be very fast data sources where 10 executed in sequence are OK; on the other hand, there might be only three long-running data sources where running them in parallel (or in the background) is a must to achieve acceptable performance for the end user.


      Unfortunately, not even the number of returned cells is a good criterion for performance decisions, because there might be data sources that return only a small number of cells (or even a single number) but require a large number of data records to be processed to get these.


      This means in the end that performance and sizing of applications must always be an individual decision based on the kind of data sources used in the application.


      Coming to your specific example of an application with 75 data sources: from my experience this is far too much, not only because of performance, but also because of the difficulty of handling this large number in one single application. In our experience 10 – 20 data sources per application is a reasonable amount. We recommend splitting up larger apps into several smaller ones.




      • Hi Martin - Thanks for your time.


        We agree with your assessment - the quantity of views arose because on-the-fly filtering was not ideal against our performance marks, so essentially we pre-filtered these views in BW queries; otherwise we could bring the quantity way down.


        Possibly in 1.5 we can look to change the strategy.


        • Hi Jim,


          I'd be interested to know how you are currently managing the 75 data sources in your application.  Have you actually created 75 separate data sources in the application itself, or do you use the assignDataSource method to swap data sources in and out as needed?





          • Hi Mustafa,

            Yes, we bring them all in and assign them to components - using layering (visible vs hidden). We didn't see much performance lag with this approach. Where we see the issues is running calculation scripts (to prepare data for charting), and background processing ties up the application from running other scripts (i.e. buttons, maximizing an Area, Category, etc.) until this startup is complete. We have basically now stripped the application down to just crosstab views; any other items we moved to their own application.


            I am really hoping query parallelization will help - and give us the flexibility. At any given time we are only displaying 4 data sources - but would like to run about 10 so navigation to other sources is fast.....




          • Hi Jim,


            Thanks for the explanation. Your application must be quite sophisticated if it uses 75 data sources. I recently did some brief experimentation with the assignDataSource method for BEx data sources where I defined just one data source alias (DS_1) and then swapped the underlying data source with assignDataSource. I found that in many cases the different data sources were cached, so that swapping the same ones in and out did not have a negative impact but in fact loaded very quickly, which meant that I could reduce the number of data source aliases while still being able to use more underlying data sources.





      • Hello Martin,


        could you provide a little more detail on how the parallel loading will behave in the BW scenario?


        Let's assume we have a simple dashboard with 3 data sources, all of them BEx queries. All 3 BEx queries use 3 different cubes.


        Now let's also assume that we have 50 users that will run this dashboard in the morning.


        If I understood the parallel loading correctly, I can assign one or more data sources to a group and the groups are loaded in parallel.


        So in the given example, if I assigned each query to its own group (meaning 3 groups in total), I would then create 150 dialog sessions on the BW server - correct?


        Also - how does the parallel loading behave in combination with the OLAP Cache in BW?


        Is each of the sessions going to run the BEx query from scratch?



        Ingo Hilgefort, Visual BI

        • Hi Ingo,


          in your example, in a "normal" application with 3 data sources and 50 users running them, you will have 50 sessions. Putting these queries in 3 separate groups to run them in parallel will result in 150 sessions instead of 50 sessions. It's simple math.



          The parallel execution and the OLAP caches do not interfere with each other. As you might know there are two types of OLAP caches: The local cache and the global cache.


          The local cache is a cache that exists per data source. Or in terms of BW: a local cache exists per Query-INSTANCE. Its primary goal is to provide fast access if a filter, variable or drill-down is changed. This cache is not shared between data sources (Query-INSTANCES), so it makes no difference if the data source is in a separate group or not.


          The global cache is shared system wide (on the BW system) and also between different users. Also in this case it makes no difference if the data source is in a group or not, because this cache is shared system-wide anyway.


          Hope that helps.




          • Hello Martin,


            just to follow up on this:


            Are you saying that the global cache is still being used with parallel query execution?


            so let's take a concrete example:


            I have Dashboard_1 with QUERY_1, Query_2, Query_3 based on InfoCube_1, InfoCube_2, InfoCube_3. Queries are set in parallel.

            10 users are executing the dashboard

            OLAP Cache should be filled after this.


            I have Dashboard 2 with QUERY_1, Query_2, Query_3 based on InfoCube_1, InfoCube_2, InfoCube_3.

            Queries are configured in 3 groups for parallel execution.

            20 users would run the dashboard.


            Are these 20 users now running the queries from scratch, or are they hitting the OLAP Cache?




            Just asking, as I am hearing different information.



            Ingo Hilgefort, Visual BI

          • Hi Ingo,


            actually I assumed my previous reply was simple and concise enough.


            But let me apply it to your sample:


            When the 10 users run the reports, the global cache will be filled. What exactly is stored in the global cache depends on the settings in the BW system. But of course the global OLAP cache is active during that time.


            After that, when the 20 users start Dashboard_2, they benefit from the content in the global OLAP cache that was put there by Dashboard_1.




  • Hi Martin,


    Good to learn that the performance features of Design Studio continue to be improved.  This blog certainly serves as a very good reference for performance optimisation techniques and recommended approaches.


    Regarding your comment related to Universe Access: "In general, to achieve best performance, it is recommended to load frequently accessed relational data into a BW or HANA system."; in this context, what is the recommendation for the "classic" BusinessObjects customer who does not have SAP or BW and doesn't necessarily have a compelling need to implement HANA?  Such customers may maintain a number of relational databases on various database systems, all being consumed through universes to take advantage of a common semantic layer.  Are we saying that Design Studio is not an appropriate tool for such customers?  If so, what is the alternative?





    • Hi Mustafa,


      this recommendation stems from the fact that Design Studio on BIP is an integration platform to create reports from all kinds of data sources. We have lots of customers that create Design Studio applications containing, for example, data sources from both Universes and BW queries. Such customers recognize that data sources based on Universes are not as fast as data sources on BW, HANA or BW-on-HANA. This is not caused by Design Studio as a client tool but simply by the fact that these systems have many more technical means (caches, pre-calculations, in-memory optimizations, and more) to achieve outstanding performance.


      Customers, however, that have been using Universes successfully in the past (and are happy with the performance of Universes in general) should of course use Design Studio to create their reports.




      • For us, it's organizationally quicker to put a UNX on top of an existing RDBMS prior to loading into BW (when the system of record isn't SAP). This isn't an SAP problem, just a lot of TMS overhead in our case.


        That being said, UNX data sources are OK until we get it into BW if needed.  Or HANA one day.

  • Hi Martin,


    I had also heard that with Design Studio 1.5, the row restrictions would be extended to around 20,000 rows. Which patch or feature pack would it be?






  • Hi,


    The described enhancements look really good. I have a question however on crosstabs:

    Will we have data selection possibilities (same as what we have for charts today)? If yes, in which version? It would really help with runtime as well, as right now I'm forced to use separate queries if I want to have a different set of measures displayed.


    Thanks a lot!

  • Hi Martin,


    you said that parallel processing has a price. From my point of view you spoke about server resources.


    But is there an impact in terms of financial cost, I mean in the case of a CSBL (concurrent session based licence) licence mode? If I increase the number of Design Studio dashboards using this parallel processing, do I need to increase the number of licences for concurrent sessions accordingly?




    • Hi Xavier,


      the parallel processing has no impact on the CSBL licensing of the BI platform. The parallel execution threads run within a single session on the BIP.




  • Hello Martin,


    Now with parallel query processing, we notice more and more that the performance of our Design Studio applications is no longer determined by the remote waiting time, but increasingly by the Java processing time. E.g. we have an application where the total runtime is 10 seconds, of which 7 seconds is Java.



    Now my question is, would the Java processing time decrease if we upgrade our server hardware (mainly CPU)? Or is this Java processing time influenced by the end user's desktop & browser performance? Or is it more the memory capacity?


    We have already sized and fine-tuned the server containing the analysis application server but cannot seem to bring this Java part down further.

    • Hi Toon,


      in general, it would help to upgrade hardware - but let us check first what kind of events are taking the time.


      can you upload or send me on private chat the "Statistics Download" as Text AND CSV from Profiling Dialog?



  • Hi Martin,

    we applied all tips and tricks to our DS application but still have an initialisation time of about 7 seconds.

    We have one command in the startup script that takes about 500 ms. It's DS_1.reloadData().

    We have to reload DS_1 because otherwise the next code line in the startup script, which is "G_Verbund=DS_29.getVariableValueExt("CO2_VERBUND_EW_CE1");", would not work. DS_1 is defined with "Load in Script = false" and I thought that the data source DS_1 was already initialized and available in the startup script without reloading it.

    At which point in time are data sources that are defined with "Load in Script = false" really initialized?

    Thanks for your support.

    Regards, Andreas

    • Hi Andreas,


      are you running Tomcat on BI Platform, possibly with AD SSO?

      We did this and lost about 5 seconds during authentication process.

      We discovered that the BI Platform does a reverse DNS lookup to log the hostname of the calling client in the audit database. However, this did not work correctly due to the setup of our DNS infrastructure (I'm not the expert in this area), which caused a wait of 5 seconds until timeout.


      By adding this line to the Tomcat Java Startup Options, we saved 5 seconds for each initial report call.



      Best regards


      • Hi Stefan,

        I learned that some exit variables need the result set of the query. So even if the query is set to "Load in Script = false", the data is not available. That's why we have to reload the query. The number of data sources (7) is responsible for the rest of the high time. Unfortunately we need these for the initial screen. We report on BIP and not NW.


        Regards, Andreas