When you plan to migrate to SAP HANA, the amount of custom code in your system will have an impact on the total cost, time and quality of the end result. In this article, we look at the potential impact of that custom code, and discuss how to tune your custom code to perform optimally on SAP HANA. By using the tips and tools mentioned here, you will be able to boost your ROI and smooth the migration process.


Even though system owners, software manufacturers and consultants do not like to admit it, custom code is a central part of the business functionality. In fact, in some processes custom code is a requirement in order to make the standard functionality work with the business requirements. With that in mind, we can expect that all SAP installations, small or large, will have some elements of custom code.
When migrating to SAP HANA, we must therefore answer the following questions:
  1. Which parts of my custom code must be changed in order to make the code compile and avoid potential functional issues?
  2. Which parts of my code shall be optimized to achieve the performance expected with SAP HANA?
  3. How can I identify which of my main business processes have the potential to be massively accelerated with SAP HANA?
[Ref. 1: Bresch, Gebhardt et al.]
The answer to the first of these questions will provide insight into potential hurdles in the process of migration, while the answers to the two latter will be essential parts of your ROI estimates.

Part 1 – The code cleansing

In general, all the code that runs on your existing platform will continue to run as before on SAP HANA. That is the case for standard SAP code, as the migration will be based on the required application enhancement pack levels in combination with the compulsory NetWeaver stack level. We can expect that SAP has replaced potentially troublesome code, and even optimised it in several areas.
That leaves you with your custom code. The general rule applies to these parts as well, and you can anticipate that most of your custom code still does what you would expect. However, as some of the fundamental characteristics of the underlying database change when you replace your old database with HANA, the code needs a thorough check.
First of all, we need to find and replace any parts of the code that rely on database specific features. Examples are native SQL statements, and the use of DB hints in Open SQL statements. Consider the code in example 1.
EXEC SQL.
  SELECT connid, cityfrom, cityto
    INTO :wa
    FROM spfli
    WHERE carrid = :c1
ENDEXEC.
Example 1: Native SQL.
This notation uses native SQL. It relies on the database to accept that exact syntax. The example is not very advanced. However, in order to eliminate the possibility of compatibility errors, a level of transparent abstraction should be introduced. This is done with a database independent statement set called Open SQL. By using Open SQL, the programmer makes sure that the code runs on any database chosen for your SAP installation or migration.
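For comparison, the same query can be expressed in Open SQL. The sketch below uses the standard SPFLI flight demo table; the carrier ID 'LH' is just an illustrative value:

```abap
DATA: wa TYPE spfli,
      c1 TYPE spfli-carrid VALUE 'LH'.

* Open SQL version of Example 1: no host-variable colons, and the
* ABAP runtime translates the statement into the native dialect of
* whatever database is installed underneath.
SELECT connid cityfrom cityto
  FROM spfli
  INTO CORRESPONDING FIELDS OF wa
  WHERE carrid = c1.
  " each row of the result set is processed here
ENDSELECT.
```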
Secondly, we need to identify, examine and possibly replace code that relies on implicit sorting done by the database. The reason for this is that the migration to SAP HANA includes a change from row based to column based architecture. Before the migration, your row based database returned your result set in implicit primary key sequence if the SQL didn’t request otherwise. After conversion to the column based database, the implicit sort sequence is no longer returned. We can expect that programmers have based code on the previously existing feature of implicit index sorting. When building and testing their code, they will have seen this feature in the debugger or in the final result set, and therefore omitted placing sorting in their own code. After migration to HANA, the code will still compile, but may not provide the correct result to the end user. Therefore, we need to place sorting in the custom code. This can be done by one of the following actions:
  • Adding ORDER BY to the SELECT statement. This is the preferred choice if indexed fields should determine the sort order.
  • Adding a SORT on the result set. The statement should follow immediately after the SELECT statement in order to prevent duplicated processing.
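The two options above might look like this in ABAP (a sketch against the SPFLI demo table; CARRID and CONNID are key fields of that table):

```abap
DATA it_spfli TYPE STANDARD TABLE OF spfli.

* Option 1: let the database sort the result set, here by the
* primary key fields of SPFLI.
SELECT * FROM spfli
  INTO TABLE it_spfli
  ORDER BY PRIMARY KEY.

* Option 2: select without ORDER BY and sort explicitly in ABAP,
* immediately after the SELECT, so that no code in between can
* rely on an undefined order.
SELECT * FROM spfli
  INTO TABLE it_spfli.
SORT it_spfli BY carrid connid.
```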
The third important issue with migration to a column based database is the conversion of pool and cluster tables. Cluster tables and some of the pool tables will be transformed into transparent tables and the relation between them is broken [Ref. 2, SAP Note 1785057]. Here is an example:
Before migration, a delete statement on a table cluster would delete from multiple clustered tables. After migration, the delete will only remove data from the table named in the SQL, not from tables that made up the cluster.
Example 2: Cluster and pool tables.
The code must be altered in order to cope with how cluster and pool tables are transformed into transparent tables. Legacy code should be adjusted with separate SQL calls to all tables that previously were in the cluster or pool.
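As a sketch, using two hypothetical tables ZTAB_A and ZTAB_B that shared a cluster before migration (the field DOCNR and the variable LV_DOCNR are equally hypothetical):

```abap
DATA lv_docnr TYPE c LENGTH 10.

* Before migration, a single DELETE on the cluster removed the logical
* record from all clustered tables at once. After migration, each (now
* transparent) table needs its own SQL call.
DELETE FROM ztab_a WHERE docnr = lv_docnr.
DELETE FROM ztab_b WHERE docnr = lv_docnr.
```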
Rewriting the code in accordance with the recommendations above does not sound so complicated. Finding the code that must be changed, however, may seem like an overwhelming task. No doubt, the amount of custom code will be a factor in estimating the effort. Luckily though, the vast majority of the code lines that make up your applications are written by SAP – and not you. Hence, SAP has needed tools to correct its own code. Some of these tools are now available for customers to use in their optimising work, and they will be of great value in the process of migrating to SAP HANA.
In particular, the ABAP Code Inspector is essential in this work. In this tool you can define check variants that determine which elements you want to analyze. The following categories will provide good help in identifying problematic code:

  • Critical statements: Find native SQL and DB hints.
  • Use of ADBC Interface: Find native SQL and DB administrative statements.
  • SELECT/OPEN CURSOR without ORDER BY: Finds problematic statements where database tables are read without order or sorting before read, search or delete.
  • Search ABAP Statement Patterns: Lets you search for index specific code.
To support the process of generally lifting the quality of the code, the Code Inspector is part of the ABAP Test Cockpit (ATC). Here, the code quality manager can schedule periodic runs, add quality gates with priorities, and publish the results back to developers. Even if you have no immediate plans to migrate to HANA, this tool should catch your interest. Putting your programming standards into a benchmarking regime will result in better quality and, in the end, better running business processes.

Part 2 – Boost your custom code

In part one we made the code run on SAP HANA and eliminated potential code problems that may occur in the migration process. In this second part, we will look at how we can boost custom code so that you achieve the “HANA effect”.
Now that your custom code will compile and run on SAP HANA, you can already expect better response times on your SQLs without putting in any additional work. This will certainly be the case for database intensive programs where the programmers have followed best practice. In processes where programmers have not focused on writing efficient code, however, the switch to SAP HANA alone will not result in massively reduced runtime.
So what are these golden rules of SQL, and what is their importance in terms of HANA? Bresch, Gebhardt et al. [Ref. 1] have made an overview of the most critical concepts:

Golden rule: Keep the result sets small.
  • Do not retrieve rows from the database and discard them on the application server using CHECK or EXIT, e.g. in SELECT loops.
  • Make the WHERE clause as specific as possible.
HANA relevance: This rule is as important as before when migrating to HANA.

Golden rule: Minimise the amount of transferred data.
  • Use SELECT with a field list instead of SELECT * in order to transfer just the columns you really need.
  • Use aggregate functions (COUNT, MIN, MAX, SUM, AVG) instead of transferring all the rows to the application server.
HANA relevance: When shifting to a column based database, this becomes more important, because whole columns must be read by the database in order to produce the returned result set.

Golden rule: Minimise the number of data transfers.
  • Use JOINs and/or sub-queries instead of nested SELECT loops.
  • Use SELECT .. FOR ALL ENTRIES instead of lots of SELECTs or SELECT SINGLEs.
  • Use array variants of INSERT, UPDATE, MODIFY, and DELETE.
HANA relevance: Array operations will be more efficient with a column based architecture, while nested SELECTs cause more inefficiency (relatively speaking) than with row based databases.

Golden rule: Minimise the search overhead.
  • Define and use appropriate secondary indexes.
HANA relevance: As secondary indexes are not required by SAP HANA, this rule has lost some of its importance.

Golden rule: Keep load away from the database.
  • Avoid reading data redundantly.
  • Use table buffering where possible and do not bypass it.
  • Sort data in your programs (unless ordering is with the primary table key).
HANA relevance: You still want to keep unnecessary load away from the database, but you DO want to hand your most data-intensive calculations to the database. This can be achieved with heavy SQL called from the application side, or by code pushdown to SAP HANA.
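As an illustration of the last point – handing the database your data-intensive calculations – a total can be computed with an aggregate function instead of transferring every row and summing on the application server. The sketch below uses the standard purchasing item table EKPO; LV_EBELN is an assumed variable holding a purchase order number:

```abap
DATA: lv_ebeln TYPE ekpo-ebeln,
      lv_total TYPE ekpo-netwr.

* Code pushdown in its simplest form: the database aggregates,
* and only a single value travels to the application server.
SELECT SUM( netwr )
  FROM ekpo
  INTO lv_total
  WHERE ebeln = lv_ebeln.
```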
In example three there is a case of SQL code that produces a correct result set, but not in an optimized way.

SELECT *
  FROM ekko INTO TABLE it_ekko
  WHERE ebeln = lv_ebeln.

SELECT *
  FROM ekpo INTO TABLE it_ekpo
  FOR ALL ENTRIES IN it_ekko
  WHERE ebeln EQ it_ekko-ebeln.

Example 3: Inefficient SQL.

There are three main problems with these statements. First of all, they trigger two separate roundtrips to the database. Secondly, the second SQL may produce an unnecessarily large result set that may never be used, as there is no check for an empty FOR ALL ENTRIES IN table from the first SQL. Thirdly, the complete field list is fetched for both tables – which should only be the case if all fields will be used in the subsequent application logic.
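A sketch of how the two statements could be repaired along those lines. The shortened field lists are for illustration only; which columns you actually need depends on the subsequent application logic:

```abap
DATA: it_ekko  TYPE STANDARD TABLE OF ekko,
      it_ekpo  TYPE STANDARD TABLE OF ekpo,
      lv_ebeln TYPE ekko-ebeln.

* Fetch only the columns that are actually needed from the header table.
SELECT ebeln bukrs lifnr
  FROM ekko
  INTO CORRESPONDING FIELDS OF TABLE it_ekko
  WHERE ebeln = lv_ebeln.

* Guard against an empty driver table: FOR ALL ENTRIES with an empty
* table ignores the WHERE clause and would fetch the whole table.
IF it_ekko IS NOT INITIAL.
  SELECT ebeln ebelp matnr menge
    FROM ekpo
    INTO CORRESPONDING FIELDS OF TABLE it_ekpo
    FOR ALL ENTRIES IN it_ekko
    WHERE ebeln = it_ekko-ebeln.
ENDIF.
```

Alternatively, a single JOIN on EKKO and EKPO would remove the second roundtrip altogether.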

As with the replacement of malfunctioning code in the first chapter, adopting these best practices should not be programmatically challenging – but the code bits worth changing may be difficult to locate. With that in mind, SAP has extended the previously discussed Code Inspector for this purpose. In the tool, you will find analysis of WHERE conditions, buffer bypass checks, nested SELECTs, checks for unguarded FOR ALL ENTRIES statements, and more.

Part 3 – Identify business processes

As the custom code potentially contains hundreds or even thousands of SQL statements, knowing where to start optimising can be a challenge. Trying to validate and correct all hits returned by the Code Inspector would be time consuming and would not provide an immediate boost to the processes that have the most to gain. As most systems have some amount of dead code, some of the effort would be time wasted. Instead, you would like to find processes whose SQL is time consuming, frequently executed or data intensive. Subsequently, you would want to combine those results with the potential code optimisations from the Code Inspector.
Most systems have large amounts of code that is unused (dead). Getting rid of dead code will be beneficial in terms of reducing the maintenance effort for your system. SAP Usage and Procedure Logging is a tool that can help identify the code you can delete from your system. The tool integrates with the Custom Code Lifecycle Management in Solution Manager.

Figure: Identifying dead code with SAP Usage and Procedure Logging [SAP Active Global Support, Ref. 3].

In order to find and prioritise processes that are expensive in terms of database calls or volume, SAP has provided the New SQL Monitor. The tool can be activated in your SAP environment without disturbing the business processes, and can be executed even before migrating to HANA. It will provide performance data on all Open SQL statements executed in the system [Ref. 1: Bresch, Gebhardt et al.].

By letting the tool run in your production environment, you will be provided with valuable logs that can be sorted and filtered along several dimensions. They will tell you which SQLs are executed most frequently within the timeframe, and which ones are the most expensive in terms of runtime and load. Starting your optimising efforts from this result set makes sense. That would certainly be the case if you can identify some extreme cases of frequently executed SQLs or processes that stand out by their high execution time. However, it is likely that you will be faced with a long list of SQLs that are frequent, data intensive and time consuming all at once.

As a result, we want to find the areas that both have the potential for optimisation and show up high in your SQL Monitor log. This can be done with the SQL Performance Tuning Worklist. It combines findings from both the Code Inspector and the New SQL Monitor: it correlates execution time, the amount of data involved and potential code deficits, and then points to the exact bits of code where you should place your effort.


In order to prepare your custom code for the migration to SAP HANA, you need to make sure that your code will still work and not produce functional errors. After that step, the focus shifts towards optimising your processes. Several tools are suggested to achieve this, along with a set of rules and best practices that should be in focus for quality management and programmers. The end result should be a smooth migration of your custom code along with the SAP standard code, and a notable performance boost in your prioritised business processes.


Ref 1: “CD200: Tune Your Custom ABAP Code – Get Ready for SAP HANA” by Stefan Bresch, Boris Gebhardt, Jens Lieberum, Johannes Marbach, as presented at SAP TechEd in Amsterdam, November 2013.

Ref 2: “Recommendations for migrating suite systems to SAP HANA”, by SAP, SAP Note 1785057 v7, found at https://websmp209.sap-ag.de/sap/support/notes/1785057 November 2013.

Ref 3: “ITM114: Real Software Utilization with Usage and Procedure Logging” by SAP Active Global Support, as presented at SAP TechEd in Amsterdam, November 2013.


  1. Mathias Montag

    Hi Simen,

    Good explanations, I will use some parts of your blog for our new course “SAP Business Suite on HANA”. If you want, you could be a lecturer! 🙂

    kind regards from Berlin


    1. Simen Huuse Post author

      Thank you Mathias! Glad to hear that you found the content inspiring. Looking forward to reading your blog.



  2. Muthukumar Pasupathy

    Hi Simen,

    Thanks for the nice blog post. What would be your suggestion on handling custom code that checks for DB secondary indexes (via FM: DB_EXISTS_INDEX; DD_INDEX_NAME etc.)? How should they be handled during HANA migration?

    1. Simen Huuse Post author

      Hi Muthukumar!

      I guess you will not find this very commonly used in your custom code. My best tip would be to remove the checks if possible.


  3. Patrick Hughes

    Hi Simen,

    you said “Secondly, we need to identify, examine and possibly replace code that relies on implicit sorting done by the database.”

    I’m looking at our custom code on a sandbox with HANA.

    For example, I ran code inspector against a custom program with a select:

       SELECT *

              FROM /sapapo/ret_tmst

              INTO TABLE i_/sapapo/ret_tmst

              WHERE target_version EQ c_000.

    Firstly it didn’t highlight anything (apart from the SELECT *), I had to go into Code Inspector defaults (Utilities – Default Check Variant – Create/Maintain) and switch on the check “Robust programming – Search problematic statements for result of SELECT/OPEN CURSOR without ORDER BY”. Then I reran and a subsequent READ TABLE i_/sapapo/ret_tmst … INDEX 1 appeared, not as an error or warning, but as an “Information” message.

    My company has a programming standard that all SELECTs are into internal tables like code above. This means we’ll have to modify every SELECT in our custom code to use ORDER BY the key of the table !!! (or do a SORT after the SELECT)

    Is my understanding correct ?

    1. Shyam Balachandran

      Hi Patrick,

      You don’t need to modify all the SELECT statements. If, after a SELECT statement, you perform a READ ... BINARY SEARCH or DELETE ADJACENT DUPLICATES, you need to make sure that the internal table the SELECT returns is sorted, as these operations rely on sorted results.

      The ATC didn’t return the result as the DEFAULT check variant does not have the necessary checks enabled. You need to use the FUNCTIONAL_DB check variant (Check variant FUNCTIONAL_DB – SAP note 1935918 ) .

      – Shyam

  4. Ashish Bhalerao

    Hi Simen,

    Very nice article.

    Just one question, is there a standard report in SAP / Solution manager which can provide list of custom code which does not comply with part 1  ?


