Do you have a Voyager workspace that is taking a long time to load? This article gives instructions for collecting some performance statistics which could identify a bottleneck.

These steps measure the time a workspace took to load and break out the time spent in the data access layer and executing MDX. They then show how to calculate what percentage of the time was spent in each layer, potentially pointing to a bottleneck.

This example uses a reasonably sized report based on the sample Adventure Works cube that is available for Microsoft Analysis Services.

Setup

The first step is to set up the profiling and logging tools and settings that are needed. The three areas that are going to be measured are:

  • End user response time using the HTTP Watch tool
  • Data access time using Voyager’s OLAP Data Access components logging
  • MDX execution time using OLAP Data Access logging and SQL Server Profiler

Setting up HTTP Watch

The basic version of this tool is available for free from http://www.httpwatch.com/. After downloading and installing it, a new button should appear in the browser, as indicated below.

Clicking this button should cause the HTTP Watch console to appear at the bottom of the browser. When the test is set up and ready to run, the ‘Record’ button will be pressed to capture the time it took for the report to load, from its invocation in the browser to its final rendering. The record button is indicated below.

Setting up the OLAP Data Access Logging

The logging to be set up at the OLAP Data Access layer will capture API call timings to the OLAP Data Access component and the times it took to execute MDX. More detailed information about the OLAP Data Access logging can be found in OLAP Data Access Logging: Part 1 – Introduction and basic settings. The registry entries that need to be added to the default entries to get the level of logging for this test are as follows:

[HKEY_LOCAL_MACHINE\SOFTWARE\Business Objects\Suite 12.0\MDA\Log\Modules\APIMODULE]
"Verbosity"=dword:0000000a

[HKEY_LOCAL_MACHINE\SOFTWARE\Business Objects\Suite 12.0\MDA\Log\Modules\APIMODULE\Components]

[HKEY_LOCAL_MACHINE\SOFTWARE\Business Objects\Suite 12.0\MDA\Log\Modules\APIMODULE\Components\INFO]
"Verbosity"=dword:00000000

[HKEY_LOCAL_MACHINE\SOFTWARE\Business Objects\Suite 12.0\MDA\Log\Modules\ODBOMODULE]
"MDX Query Log"="C:\Program Files\Business Objects\BusinessObjects Enterprise 12.0\Logging\ODBOMDX.log"
"MDX Query Clock"=dword:00000001

The MDAS server will have to be restarted before these settings take effect. This will create two log files in the following directory:

C:\Program Files\Business Objects\BusinessObjects Enterprise 12.0\Logging

The files will be called MDA.log and ODBOMDX.log.

Setting up SQL Server Profiler

Instructions on how to set up SQL Server Profiler can be found in Voyager: Capturing MDX timings using SQL Profiler.

Final preparation

When running performance tests, the tests should ideally be run on an isolated system. That means that no other users are using the Voyager or Business Objects Enterprise applications and that no other software is accessing the OLAP server.

It is also useful to make sure any log files only contain information about a particular test run. Therefore make sure the SQL Server Profiler trace is empty, stop the MDAS server, remove any existing log files and then restart the server.

Running the test

When everything is prepared, SQL Server Profiler and HTTP Watch both need to be set to start recording. This is the point at which the test report can be loaded. Once the report has loaded, stop the traces for both HTTP Watch and SQL Server Profiler and note the total time displayed in the HTTP Watch console as indicated below.

Find the location of the log files and save copies to a fresh location so that the copies contain only data from this test run. Finally, save the SQL Server Profiler trace file.

Collecting the performance statistics

The time value shown in HTTP Watch indicates the total time the end user had to wait. The next step is to analyze the MDA.log file. This includes a log of the API calls into the data access component.

The MDA.log file 

The first statistic to gather is the time the first call to the data access layer started and the time the last call completed. These should be the first and last lines in the log file. In this example the lines look as follows:

2009-12-18T14:12:00.272 7312 APIMODULE ConnectionManagerImpl EXTRA ConnectionManagerImpl::getDrivers

and

2009-12-18T14:12:13.956 7480 APIMODULE CubeImpl EXTRA CubeImpl::supports

The times are highlighted. For this test, 13.956 − 0.272 = 13.684 seconds elapsed between the first and last calls to the data access API.
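The subtraction above can also be scripted. The following is a minimal sketch, assuming each log line starts with a timestamp in the `YYYY-MM-DDTHH:MM:SS.mmm` format shown in the example lines:

```python
from datetime import datetime


def elapsed_seconds(first_line: str, last_line: str) -> float:
    """Return the seconds elapsed between the timestamps of two MDA.log lines.

    Assumes each line begins with a timestamp like 2009-12-18T14:12:00.272.
    """
    fmt = "%Y-%m-%dT%H:%M:%S.%f"
    start = datetime.strptime(first_line.split()[0], fmt)
    end = datetime.strptime(last_line.split()[0], fmt)
    return (end - start).total_seconds()


first = "2009-12-18T14:12:00.272 7312 APIMODULE ConnectionManagerImpl EXTRA ConnectionManagerImpl::getDrivers"
last = "2009-12-18T14:12:13.956 7480 APIMODULE CubeImpl EXTRA CubeImpl::supports"
print(elapsed_seconds(first, last))  # 13.684
```

In practice, read the first and last lines of MDA.log instead of hard-coding them.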

This does not mean that all this time was spent in the data access component or lower, as there may have been time spent in higher layers in between API calls. A good measure of how much time was spent in the data access layer can be obtained by extracting all the lines in the log file which measure a time and adding up the times. For example, a line in the log file might look as follows:

2009-12-18T14:12:00.642 7312 APIMODULE TIMER DEBUG Call to ConnectionManagerImpl::getDrivers took 0.370 seconds

Using a text editor (for example TextPad) or a script, extract all the lines from the log file that contain the word ‘took’. In TextPad this can be done by pressing F5 to invoke the search dialog, typing the word ‘took’ and then pressing ‘Mark All’ rather than ‘Find Next’. Under the ‘Edit’ menu you can choose ‘Copy Other’ and then ‘Bookmarked lines’ to extract all the relevant lines.

The next step is to extract all the times. One way to do this is to transform the file to CSV format by replacing all the spaces with a comma and a space, saving the file with a CSV extension and then opening it in Excel. With a few exceptions, all the times should now be in a single Excel column which can easily be totaled.
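The extract-and-total steps above can also be done in one pass with a small script. This is a sketch, assuming the MDA.log timing lines always end with `took N.NNN seconds` as in the example line earlier (the sample text here is illustrative, not from a real run):

```python
import re


def total_took_seconds(log_text: str) -> float:
    """Sum the durations reported on MDA.log 'took N.NNN seconds' lines."""
    pattern = re.compile(r"took\s+([0-9]+\.[0-9]+)\s+seconds")
    return sum(float(m.group(1)) for m in pattern.finditer(log_text))


sample = """\
2009-12-18T14:12:00.642 7312 APIMODULE TIMER DEBUG Call to ConnectionManagerImpl::getDrivers took 0.370 seconds
2009-12-18T14:12:01.100 7312 APIMODULE TIMER DEBUG Call to CubeImpl::supports took 0.125 seconds
"""
print(round(total_took_seconds(sample), 3))  # 0.495
```

Point this at the full contents of MDA.log and it replaces the TextPad/Excel round trip.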

The screen shot below shows an example CSV file loaded in Excel.

A couple of examples where the times don’t quite line up in the same column are visible. An easy way to identify all the rows which need attention is to sort the timings column from largest to smallest; all the lines that need editing should then be at the top. Sorting the API calls this way may also highlight any long running API calls which might warrant further investigation.

The total for this example is shown below.

So 3.73 seconds were spent in the data access component.

The ODBOMDX.log file

The ODBOMDX.log file contains the times for all MDX statements that were executed. The times appear on lines that look like the following:

Query duration: 0.002 seconds

Using a similar technique as above, these lines can be extracted (by extracting all lines that include ‘Query duration’), transformed to a CSV file and totaled in Excel.
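Again, the extraction and totaling can be scripted directly. A minimal sketch, assuming the ODBOMDX.log timing lines match the `Query duration: N.NNN seconds` form shown above (the sample text is illustrative):

```python
import re


def total_query_seconds(log_text: str) -> float:
    """Sum the MDX times on ODBOMDX.log 'Query duration: N.NNN seconds' lines."""
    pattern = re.compile(r"Query duration:\s+([0-9]+\.[0-9]+)\s+seconds")
    return sum(float(m.group(1)) for m in pattern.finditer(log_text))


sample = "Query duration: 0.002 seconds\nQuery duration: 0.015 seconds\n"
print(round(total_query_seconds(sample), 3))  # 0.017
```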

The results for this example are shown below.

So 1.181 seconds were spent executing MDX.

The SQL Server Profiler trace

If the trace has been set up to include just the ‘Query End’ events, the contents of the trace file can be copied and pasted into Excel and the query times again totaled.

The results for this example are shown below.

The results from SQL Server Profiler are in milliseconds, so this translates to 0.389 seconds. Given that SQL Server Profiler and the ODBOMDX.log are measuring the same thing (execution of MDX), it is worth pausing briefly to explain the discrepancy between the two times given above. The difference comes down to where the timings are taken: the ODBOMDX.log timings are taken at calls into the ODBO client, while the times measured by SQL Server Profiler are taken purely in the Analysis Services server itself. The ODBOMDX.log times are therefore longer by the amount of time it takes for calls to cross the network between the MSAS server and the ODBO client, plus the time spent in the ODBO client itself.

Presenting the results

There should now be enough data to present some results and to narrow down where most of the time is being spent. So for this example the results are as follows (percentages are calculated as percentages of the end user time):

End user time: 17.888 seconds 100%

Data Access first and last calls: 13.684 seconds 76.50%

Data Access total time: 3.73 seconds 20.85%

MDX (client) time: 1.181 seconds 6.6%

MDX (server) time: 0.389 seconds 2.17%
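The percentage calculation is just each measured time divided by the end user time. A small sketch using the figures from this example run:

```python
# Figures copied from the example run above; substitute your own measurements.
end_user = 17.888  # total end user wait, from HTTP Watch, in seconds

timings = {
    "Data Access first and last calls": 13.684,
    "Data Access total time": 3.73,
    "MDX (client) time": 1.181,
    "MDX (server) time": 0.389,
}

# Each percentage is calculated against the end user time.
for name, seconds in timings.items():
    print(f"{name}: {seconds} seconds = {100 * seconds / end_user:.2f}%")
```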

So with nearly 80% of the time not spent in the data access layer, it is not a bottleneck in this case. However, for more complicated reports that load more data, the opposite might hold true.
