
Help! My SELECT is slow!

Now that the first BW systems have been migrated to SAP HANA, I believe there will be some demand for understanding what is happening inside the database. I am an Oracle DBA. On Oracle, there were these dreaded “db file sequential read” wait events slowing down your queries, so preventing physical reads was mostly the way to go. With HANA, things look quite different: all we have is CPU time and memory consumption. (OK, for BW data loads there is some writing to the log volumes as well.) When I look at InfoCube 0TCT_C01, I only get a summary value for TIMEDM. InfoCube 0TCT_C03 gives a little more detail hinting at BW topics, but that is not what I was used to, e.g. with Oracle tracing or the Automatic Workload Repository. So what can I do in HANA?

Introducing HANA Perftrace

When studying the available HANA documentation from SAP, it seems obvious that “perftrace” is the tool for performance analysis. You can find the performance trace feature in the “SAP HANA Database – SQL Reference Manual” on pages 122-124:

ALTER SYSTEM START PERFTRACE [<user_name>] [<application_user_name>] [PLAN_EXECUTION] [FUNCTION_PROFILER] [DURATION <duration_seconds>]


SAP even tells you that these performance traces can be processed with HDBAdmin. So let’s start! To keep things simple, I want to start with some kind of “Hello world!” example. Initially I just want to make sure I can perform the performance trace and analyze one of the simplest conceivable SQL statements. How about simply counting the number of users in the SAP system:
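As the trace analysis further below confirms, the statement shown in the screenshot is a simple count over the ABAP user master table USR02:

```sql
-- "Hello world" of SQL tracing: count the users in the ABAP user master table
SELECT COUNT(*) FROM USR02;
```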


Tracing vs. Documentation

Since I don’t know perftrace yet I simply start it with all options (plan_execution and function_profiler). I would like to limit the tracing to my simple SQL statement, therefore I set the user_name and application_user_name. But wait! What is that?

hdbsql HDB=> alter system start perftrace saphdb hp plan_execution function_profiler duration 5

* 257: sql syntax error: incorrect syntax near "saphdb": line 1 col 30 (at pos 30) SQLSTATE: HY000

Did SAP forget to implement that feature? Or do I need single quotes/double quotes for the username? After some trial and error I give up and start tracing without user limitation:

hdbsql HDB=> alter system start perftrace plan_execution function_profiler duration 5

0 rows affected (17.696 msec)

Fine, tracing is activated and I have to hurry to start my SQL command. I use the DBA cockpit to execute my SELECT:


I click the “Execute” button, et voilà:


This is only a small demo box, so only 9 users were found in the table. The database interface measured the execution time as 3786 µs. Some *.prf files have been written to the HANA trace directory:

usssapl1:(/usr/sap/HDB/HDB40/usssapl1/trace)(root)# ll *.prf

-rw-r--r-- 1 hdbadm sapsys 59036 Jul 19 11:06 indexserver.18357.000.prf

-rw-r--r-- 1 hdbadm sapsys  3481 Jul 19 11:06 nameserver.15660.000.prf

-rw-r--r-- 1 hdbadm sapsys   221 Jul 19 11:06 preprocessor.18018.000.prf

-rw-r--r-- 1 hdbadm sapsys  6283 Jul 19 11:06 statisticsserver.18450.000.prf

-rw-r--r-- 1 hdbadm sapsys  6809 Jul 19 11:06 xsengine.18575.000.prf

Finally SAP HANA has to consolidate that information into one perftrace file:

hdbsql HDB=> alter system save perftrace into file 'perftrace.tpt'

0 rows affected (3008.563 msec)

usssapl1:(/usr/sap/HDB/HDB40/usssapl1/trace)(root)# ll perftrace.tpt

-rw-r--r-- 1 hdbadm sapsys 22039 Jul 19 11:13 perftrace.tpt

Analyzing traces via HDBAdmin

The next step is to load that file into HDBadmin:

python --sqlUser=SYSTEM --sqlPassword=SYSTEMPASSWORD


With any HDBAdmin installation you can analyze the trace data of other HANA instances. Therefore you initially get a summary view showing from which host the performance trace was taken and the timespan of the trace data:


Even though I tried to keep the traced timespan short, my SAP user triggered two SQL commands which are now found in the trace data. Checking the details, I see that the first one is my SELECT on USR02. As expected, the indexserver executed my SELECT command, and its duration was 1.2 ms:


On the next tab, “Call Patterns”, I find some information which seems to be useful only for SAP support:


Now, on the “Call Pattern” tab, things get really interesting. This is what I have been looking for: the SELECT statement is broken down into processing steps, together with their processing times. And best of all, there is even a graphical Gantt-chart-style view visualizing the processing of my SQL command! Make sure you run your perftraces with the option FUNCTION_PROFILER, because otherwise this page will remain empty!

Please also note the “Export” button: with it you can save part of the trace data to CSV files, which can easily be imported into Excel.


Of course these HANA-internal processing steps are not officially documented, but at least you get some information via the “Help” button. So this is the starting point for getting a feeling for how HANA operates and for deciding what is normal vs. abnormal processing.

The other available tabs seem less relevant. “SQL Plans” and “Column Store Plans” only provide summary information on the SQL plans. “Time Line” provides graphs of the overall CPU and memory consumption during the traced time period.


I really like this graphical overview on the “Call Pattern” tab. Similar to SAP’s root cause analysis and end-to-end tracing, you can easily identify where most of the time is spent. The option of exporting the trace data to Excel is also very useful, because you are not limited to HDBAdmin for trace data analysis. Maybe the SAP HANA wiki would be a good place to post some insights about SAP HANA methods? I just wonder why the perftrace command didn’t fully work as documented by SAP; it must be some silly error on my part.




  1. Mark Förster Post author

    In my initial version I had an error about ALTER SYSTEM SAVE PERFTRACE; now the clause is correct, and it is no problem specifying a different file name.

  2. Witalij Rudnicki

    Hello Mark. Thank you for sharing your discoveries with the community. I am looking forward to your further tries and findings.

    Here are few quick remarks:

    1. The HANA database has several engines for processing relational data: SQL, Calculation, Join and OLAP. It is worth knowing them in order to run the proper traces and to understand the trace output.

    2. The perftrace you used in the example is the tool most suitable for columnar views. For an SQL trace you can use HANA Studio -> Diagnosis tab -> Configure Trace button -> Trace Level = ALL

    3. Once you analyze your example it is worth to know the following:

    POP = Plan Operations, using prefixes:

    – JE = Join engine (like in your example)

    – Sql = SQL engine

    – Bw = OLAP engine (a legacy of BWA 🙂)

    – CE = Calculation Engine.
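    Regarding the SQL trace mentioned in point 2: it can also be switched on without HANA Studio, directly via SQL (a sketch using the sqltrace section of indexserver.ini):

```sql
-- Switch the SQL trace on system-wide; output goes to the trace directory
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('sqltrace', 'trace') = 'on' WITH RECONFIGURE;

-- ... run the workload to be traced ...

-- Switch it off again to avoid trace file growth
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('sqltrace', 'trace') = 'off' WITH RECONFIGURE;
```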

    Keep digging and am waiting for next post with your findings!

    1. Mark Förster Post author

      Hello Vitaliy,

      thanks for your feedback! On the “Diagnosis” tab I didn’t find the “Configure Trace” button, but on the “Trace Configuration” tab I can enable the SQL Trace; that should be the same functionality.

      What about that sqltrace*.py file: is it just an ASCII file, or is there any tool to process the output? I’m just wondering about the .py file extension. You see, I keep on digging…



      1. Witalij Rudnicki

        Well, the last time I looked at it was SPS2, so it may be slightly different or improved by now, but basically you would open this .py file for viewing and then scroll through, trying to read and understand it, manually calculating time differences between timestamps 🙂

  3. Ethan Zhang

    Nice post, thanks for sharing your findings. 

    A couple of questions I want to ask you.

    1. Where can I download HDBAdmin to analyze the trace log?
    2. Did you connect remotely to your SUSE server from Windows? Can I know the name of the connection tool?
    3. Where can I find the database error trace?

    Thanks in advance.

    1. Mark Förster Post author

      Hello Ethan,

      1. HDBAdmin is part of the HANA server package; there is no need to install it separately.

      2. VNC is a good tool: start a vncserver on Linux and connect to it via vncviewer.

      3. All HANA trace files are located in the HANA trace directory: /usr/sap/<SID>/HDB<nn>/<hostname>/trace
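      If you prefer SQL over shell access, the same trace files can also be listed via the monitoring view M_TRACEFILES:

```sql
-- List the instance's trace files, newest first
SELECT HOST, FILE_NAME, FILE_SIZE, FILE_MTIME
  FROM M_TRACEFILES
 ORDER BY FILE_MTIME DESC;
```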



    1. Lars Breddemann

      I really wonder what’s going on in our company…

      You, as an SAP employee, can and should use the _internal_ communities and forums to ask for information about unsupported and unofficial features.

      SCN is a public forum – you do realize that, right?

      Perftrace is not supported for end users for a variety of good reasons. One of them is that PlanViz is available, which provides most of the relevant information anyhow in a nice GUI.

      If you really want to use the perftrace you need to have access to a core HANA developer and the source code to make proper sense out of it.

      – Lars

      1. Hana Deere

        PlanViz is for a single statement, but if you want to test performance in an OLTP scenario with 10 users hitting the same objects, what is the way out?

        My OLTP scenario is that we have a scripted calculation view which gets hit from the front end through OData.

        Now I want to test the real-world scenario where thousands of people are going to hit the same object; I need to know the performance, including start time, end time, how many records it pulled, etc.

        Can you please suggest or guide me regarding this?

        1. Lars Breddemann

          How about using the other tools like SQL plan cache, expensive statements trace and LOAD graph?

          Meanwhile there are handy SQL scripts available via an SAP Note that make it fairly easy to review important runtime statistics for the complete system and/or single statements.
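          For illustration, the expensive statements trace mentioned above first has to be enabled and can then be queried via a monitoring view (a sketch; the 1 ms threshold is just an example value):

```sql
-- Enable the expensive statements trace (threshold in microseconds, here 1 ms)
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('expensive_statement', 'enable') = 'true',
      ('expensive_statement', 'threshold_duration') = '1000'
  WITH RECONFIGURE;

-- Review the captured statements, slowest first
SELECT START_TIME, DURATION_MICROSEC, RECORDS, STATEMENT_STRING
  FROM M_EXPENSIVE_STATEMENTS
 ORDER BY DURATION_MICROSEC DESC;
```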

          You may also use an SAP NetWeaver system with DBACockpit & SolMan to monitor your SAP HANA instance.

          Every one of these tools can be used here.

          The perftrace, however, doesn’t provide any information that you could use to improve your solution.

          – Lars

          1. Hana Deere

            The SQL plan cache, expensive statements trace and LOAD graph don’t capture everything. The front-end team is using JMeter to get a performance report; is there any application which I can use to capture performance for the SAP HANA DB at the backend?


            WHERE host = 'xxx' AND FILE_NAME = 'indexserver_xxx.30003.sqlplan.trc.old';


            WHERE host = 'xxx' AND FILE_NAME = 'indexserver_xxx.30003.profiler.trc.old';


            WHERE host = 'xxx' AND FILE_NAME = 'perftrace.tpt';

            I tried these scripts and I see that data has been captured, but I cannot put it into a report, as it is not in a tabular format, or in any specific format which I could then split to make some sense out of it.

            1. Lars Breddemann

              The perftrace is a closed binary format. It’s not for end user usage.

              What do you want to measure that you cannot measure with the mentioned tools?

              Your JMeter approach is nice and fine, but as long as you don’t do network round-trip traces (only possible on the JDBC driver level) you don’t get “everything” there either.

              “Everything” really is not a good requirement…

              1. Hana Deere

                Basically I want to get

                1) How many users hit a particular object (in my case a scripted calculation view)

                2) The object (in my case a scripted calculation view) used

                3) The input parameters of each user

                4) The start time when the calculation view was hit by each user

                5) The end time when the process was completed for each user

                6) The number of records it pulled in the process

                Nos. 4 and 5 are for getting the actual time the process took inside the DB for each user to get their result.

                1. Lars Breddemann

                  Ok, let’s briefly go through this list:

                  1) There is no KPI kept that contains this information. You have to derive it from other measurements, e.g. the SQL plans for a specific time frame.

                  2) You’ll find that in the expensive statements trace and the SQL plan cache

                  3) This information is kept in the expensive statements trace and in the SQL plan cache (as of SPS 8)

                  4) to 6): Please have a look at M_SQL_PLAN_CACHE in the SAP HANA SQL and System Views Reference (SAP Library). It contains a lot of runtime data, already correctly split into the different shares (preparation, execution, fetches, etc.)
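                  For points 4) to 6), a query along the following lines over the documented M_SQL_PLAN_CACHE columns returns execution counts, timings and result record counts per statement (a sketch):

```sql
-- Aggregated runtime statistics per cached plan
SELECT STATEMENT_STRING,
       EXECUTION_COUNT,
       TOTAL_EXECUTION_TIME,       -- microseconds
       AVG_EXECUTION_TIME,         -- microseconds
       TOTAL_RESULT_RECORD_COUNT,
       LAST_EXECUTION_TIMESTAMP
  FROM M_SQL_PLAN_CACHE
 ORDER BY TOTAL_EXECUTION_TIME DESC;
```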



