John Appleby

Best Practices for SAP HANA Data Loads

This is a repost of content first authored on saphana.com as "Best Practices for SAP HANA Data Loads". I hope the SCN audience likes it too!

I’ve been involved in a few SAP HANA projects where performance has been initially disappointing, and I have been asked to come and show the customer HANA speed. As I’ve said before, you can take the best technology in the world, create a bad design, and it will work badly. Yes, even SAP HANA can be slow.

With that in mind, here are my best practices for HANA data loading.

1) Build the right architecture

Every technology has limits, and HANA is constrained by the same laws of physics as any other computer system. The major differences with HANA are:

a) Data is kept entirely in-memory, so accessing any of it has the same latency

b) Data is compressed in-memory and only uncompressed in the CPU itself

c) Data may be loaded into all cores (up to 80) simultaneously and a single instruction executed against all at the same time (massively parallel)

If you think about all this, then by far the biggest challenge with HANA is getting optimized data in fast enough. Networks top out at around 300MB/sec, disks are often even slower, and if you need to load data into HANA fast, you need to think carefully about your design.

2) Optimizing the data model

HANA doesn’t work like Oracle, so you need to adjust your thinking. Here are the golden rules for HANA tables:

  • Keep tables as narrow as possible. HANA has a cost for columnar inserts, so data load performance drops as the number of columns increases. Don't build out fields you don't need. Interestingly, very wide tables have lower throughput in MB/sec than narrower tables; once you get over 80-100 columns, you will see up to a 25% degradation in throughput.
  • Consider your partitioning strategy. On a single-node appliance, 3-5 partitions is normally a good number for optimal insert performance. On a scale-out appliance, you can multiply this by the number of active nodes over which you distribute the table.
  • Avoid using keys if you don't need them. Actually, avoid using the HANA database for referential integrity if you can – it is expensive.
  • Avoid using VARCHARs if you don't need them. Fixed-width CHAR is better for smaller fields, and INTEGER fields are better still. VARCHARs have a cost and don't compress as well.
Good:

CREATE COLUMN TABLE "SCHEMA"."TABLE" (
     "TIME" TIME,
     "FIELD1" CHAR(1),
     "FIELD2" CHAR(16),
     "FIELD3" CHAR(4),
     "MEASURE1" INT,
     "MEASURE2" DECIMAL(7,4))
PARTITION BY ROUNDROBIN PARTITIONS 12;

Bad:

CREATE COLUMN TABLE "SCHEMA"."TABLE" (
     "TIME" TIMESTAMP,
     "FIELD1" VARCHAR(255),
     "FIELD2" VARCHAR(255),
     "FIELD3" VARCHAR(255),
     "MEASURE1" BIGINT,
     "MEASURE2" DECIMAL(15,7),
     "UNUSED_FIELD" VARCHAR(2000));

Note that in my test example here, the "Good" table loads 6x faster. I use 12 partitions because I have a 4-node environment with one master node and the table partitions distributed over the remaining nodes – 4 partitions on each of 3 nodes. This gives maximum table distribution and therefore throughput.

Note that delta merges also perform better with multiple partitions. But if you really want to learn about table partitioning, read the "Table Partitioning in the SAP HANA Database" section in the SAP HANA Administration Guide, here: SAP HANA Appliance Software – SAP Help Portal Page
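As a hedged illustration (the table, column and partition counts here are hypothetical, not from my test), this is roughly what a hash-partitioned table on a scale-out system looks like, together with the query I use to check where the partitions actually landed – M_CS_TABLES shows one row per partition with its host and size:

CREATE COLUMN TABLE "SCHEMA"."SALES_FACT" (
     "TIME" TIME,
     "CUSTOMER_ID" CHAR(16),
     "MEASURE1" INT)
PARTITION BY HASH ("CUSTOMER_ID") PARTITIONS 12;

-- One row per partition: which host it sits on, how many records, how much memory
SELECT HOST, PART_ID, RECORD_COUNT, MEMORY_SIZE_IN_TOTAL
FROM M_CS_TABLES
WHERE SCHEMA_NAME = 'SCHEMA' AND TABLE_NAME = 'SALES_FACT'
ORDER BY HOST, PART_ID;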

3) Get the upper bound by optimizing a control file load

I always test control file loads with HANA first, using CSV files. If your files are in the right format then control files are always the fastest way to load into HANA. They were written by the bulk loader team and you can get amazing load performance. This is how I do it:

a) Turn off database logging

On a single-node appliance, or a scale-out appliance like IBM's or Hitachi's that uses Fusion-io for logs, you will get the same performance with and without logging enabled. The only difference is that you will fill up the log disk and crash the database. Turn logging off until you do your final testing.

You do this by double-clicking your database node on the left-hand side of HANA Studio, then going to Configuration -> global.ini -> persistence. Set enable_auto_log_backup to no and log_mode to overwrite.
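If you would rather script this than click through Studio, the same two parameters can be set in SQL. This is just a sketch of that idea (run it as an administration user; depending on revision a restart may be needed before the log_mode change takes effect, and remember to switch both back before final testing):

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'log_mode') = 'overwrite' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'enable_auto_log_backup') = 'no' WITH RECONFIGURE;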

b) Get the file in CSV format with the minimum number of columns

I usually use a bash script to do this, with a combination of grep, awk, sed, cut and even perl if it gets ugly. You can convert fixed format, or any format, into nice clean CSV files. Perl even has a CSV library that you can use to do this. Here's a neat example that reads your CSV and gives you the number of rows, the number of columns and the maximum length of each column.

#!/bin/bash
# Report the number of rows and columns in test.csv, then the maximum
# length found in each column.
typeset -i a
let a=1
COLUMNS=`awk 'BEGIN {FS=","} ; END {print NF}' test.csv`
ROWS=`wc -l < test.csv`
echo $ROWS ' rows'
echo $COLUMNS ' columns'
while ((a <= COLUMNS)); do
echo -n $a
cut -f $a -d\, < test.csv | awk '{ if (length($0) > max) {max = length($0); maxline = $0} } END { print maxline }' | wc -c
let a++
done
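And as a sketch of the conversion itself: if your source is a fixed-width file, awk will slice it into CSV. The field positions below (columns 1-10, 11-18 and 19-24) are invented for the example, so substitute your own record layout:

# Hypothetical layout: three fields at columns 1-10, 11-18 and 19-24
awk '{ f1=substr($0,1,10); f2=substr($0,11,8); f3=substr($0,19,6);
       gsub(/ +$/,"",f1); gsub(/ +$/,"",f2); gsub(/ +$/,"",f3);   # strip the fixed-width padding
       print f1 "," f2 "," f3 }' fixed.txt > test.csv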

Quite often, I create demo data in my lab based on a customer’s schema. I use one of a few tricks to do this:


i) Excel


Excel is great for this and you can create a million rows at a time using tricky functions like this (put your values, weighted, in Sheet 2 Column A):


     =INDEX(Sheet2!$A:$A,RANDBETWEEN(1,COUNTA(Sheet2!$A:$A)),1)

Then I copy the CSV file to the HANA appliance and run a bunch of commands like:

     for a in {1..9}; do cat test.csv | sed s/2013-02-04/2013-02-0$a/; done >> testbig.csv

     for a in {10..28}; do cat test.csv | sed s/2013-02-04/2013-02-$a/; done >> testbig.csv

This allows me to explode a 1m row file to as many rows as I like.


ii) Data Generator


For more complex files I use a tool called DTM Data Generator. This allows me to generate complex files at about 10,000 rows/sec to a CSV file (don’t use it to generate data directly into HANA, performance sucks). Overnight I can generate about 360m rows, which is enough for testing. It costs $149 for the Standard Edition, which allows 10m rows per job. So I create the job I want and duplicate it 50 times. If I wasn’t so cheap, I’d buy the Enterprise Edition at $349, which allows 2bn rows per job and multithreads.


c) Time the load of the file to /dev/null

# du -m table.csv

30358     table.csv

# time cat table.csv > /dev/null

real 0m9.667s

So I’ve put this 346m row, 30GB file on a Fusion IO disk and I can read it at 3140MB/sec (or 35.8m rows/sec). I know now that this is the absolute maximum possible to ingest into HANA. Bear in mind that in the real world, it’s really hard to move data around at more than 100MB/sec – networks are slow.
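The arithmetic is just size divided by wall-clock time, which you can do on the appliance with bc if you don't trust your mental maths:

echo "scale=0; 30358/9.667" | bc     # ~3140 MB/sec
echo "scale=2; 346/9.667" | bc       # ~35.8m rows/sec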

d) Build a control file

Now I create a control file, like this:

cat > /sapmnt/log/test.ctl
import data
into table SCHEMA.TABLE
from '/sapmnt/log/test.csv'
record delimited by '\n'
field delimited by ','
optionally enclosed by '"'
error log /sapmnt/log/test.bad

And what's great is that we can iterate over a) the number of partitions, b) the number of threads and c) the batch size.

e) Make sure the load runs well

Make sure you aren't generating lines in the BAD file, because this slows down loading substantially. The data needs to be clean. Fix it with the UNIX tools above (usually sed, cut, awk, grep) if necessary.

f) Iterate, iterate, iterate

I make sure my CSV file has enough data to make HANA think (20-30GB is a good amount) and I iterate. For my example, my 346m row file is 30358MB (30GB). With the default options (1 partition, 1 thread, default batch size), the load happens in 1 hour, 35 minutes 30 seconds. This is not HANA speed!

So I start with a rough set of parameters:

  • 5 partitions per active table node (15 partitions in my example)
  • As many threads as CPU cores × active nodes (120 threads in my example)
  • Batch size depending on the width of the table. For a simple table like this, I run with 200000. For much wider tables with 50-100 columns, I start with 20000.

With the SQL below, the load time comes down from 5730s to 88s. That's more like it, and now we start tweaking to find the optimal settings for those 3 parameters via iteration. I expect this to improve performance by a further 30% in most cases. You may not think it's worth it, but that extra 30% matters to me.

DROP TABLE "SCHEMA"."TABLE";

CREATE COLUMN TABLE "SCHEMA"."TABLE" (
     "TIME" TIME,
     "FIELD1" CHAR(1),
     "FIELD2" CHAR(16),
     "FIELD3" CHAR(4),
     "MEASURE1" INT,
     "MEASURE2" DECIMAL(7,4))
PARTITION BY ROUNDROBIN PARTITIONS 15;

ALTER TABLE "SCHEMA"."TABLE" DISABLE AUTOMERGE;
IMPORT FROM '/sapmnt/log/test.ctl' WITH THREADS 120 BATCH 200000;
MERGE DELTA OF "SCHEMA"."TABLE";
ALTER TABLE "SCHEMA"."TABLE" ENABLE AUTOMERGE;


And I run it lots of times. I iterate on threads, then partition, then batch, and sometimes again. I build a table with load times and tune it until I feel like I’ve got HANA speed. The key is to copy and paste this statement 50 times into a notepad file and then change the parameters for each load. Execute it as one long SQL statement and leave it running overnight, loading, dropping and loading 50 times or more over.
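If writing out 50 variants by hand gets tedious, a small script can generate the overnight batch for you. This is only a sketch – the schema, table, control file path and parameter ranges are placeholders from my example above – and the output is a plain .sql file that you execute exactly as described, in HANA Studio or via hdbsql:

#!/bin/bash
# Sketch: generate one drop/create/import/merge block per THREADS x BATCH combination
OUT=iterate_loads.sql
> $OUT
for THREADS in 40 80 120 160; do
  for BATCH in 20000 50000 100000 200000; do
    cat >> $OUT <<EOF
DROP TABLE "SCHEMA"."TABLE";
CREATE COLUMN TABLE "SCHEMA"."TABLE" ("TIME" TIME, "FIELD1" CHAR(1), "FIELD2" CHAR(16), "FIELD3" CHAR(4), "MEASURE1" INT, "MEASURE2" DECIMAL(7,4)) PARTITION BY ROUNDROBIN PARTITIONS 15;
ALTER TABLE "SCHEMA"."TABLE" DISABLE AUTOMERGE;
-- variant: THREADS $THREADS, BATCH $BATCH
IMPORT FROM '/sapmnt/log/test.ctl' WITH THREADS $THREADS BATCH $BATCH;
MERGE DELTA OF "SCHEMA"."TABLE";
ALTER TABLE "SCHEMA"."TABLE" ENABLE AUTOMERGE;
EOF
  done
done
echo "Wrote $(grep -c '^IMPORT' $OUT) load variants to $OUT"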

What's HANA speed? In my example, down from 5730 seconds to 60 seconds – a 95x performance increase. Nearly 6m fully committed rows/second and 500MB/sec of throughput. On a single-node Medium appliance with 40 cores, you should expect around 275MB/sec of throughput into HANA. Either way, we are in the region of 10GbE Ethernet throughput.

Anyhow, in most cases control file loads aren't a good way to load into HANA long term, because they lack flexibility and the ability to process bad data. But now you have an upper bound for how fast you can ingest. The Data Services team tells me they aim for performance that approaches the HANA bulk loader, for example.

4) Design the right loading architecture

Now you know enough about your data to design the right loading architecture for HANA. Note one really important thing: I like to think of HANA as a hungry caterpillar (reminds me of The Cure song Lullaby, but anyhow). How fast you get data into HANA is almost always limited, in a well-designed environment, by how fast you can feed it.

This means you need fast storage where the data to be loaded resides, fast application server threads to process it, and fast plumbing – always 10GbE Ethernet – to move data around. If you don't have all of this, your HANA in-memory appliance will go to waste. Don't think you can run Data Services on a VMware farm, for example, and expect to move data around at more than 80MB/sec.

From there, the choice of tool, or tools, is critical. Use this table as a rule of thumb.

Use Case               | Control File | Data Services | LT | SRS | ESP
Benchmarking and PoCs  | X            | O             | O  | O   | O
Batch Loads            | O            | X             | -  | -   | O
Transformations        | -            | X             | O  | O   | X
Fixing Data Quality    | -            | X             | -  | -   | O
Streaming Data         | -            | -             | -  | -   | X
Real-Time Replication  | -            | O             | X  | O   | O
Replication From SAP   | -            | O             | X  | O   | O

Tell me what else you want to see here!

X – Core purpose

O – offers the functionality

“-” – does not offer the functionality.

a) Control Files

As I’ve said before, Control Files are great for benchmarking and PoCs. And for beating up other ETL teams to get near their performance. After that, you aren’t likely to use them much in a productive system.

b) Data Services

Data Services is a great general purpose batch tool. It offers real-time web services but there are better options if you want real-time. It also allows great data transformation, matching and enriching, but every additional transform you add to your batch job will slow it down, so be careful.

In addition, if you are loading files in weird formats, like fixed-format files, then Data Services is great. There are lots of tips and tricks for SAP HANA to be found at http://wiki.sdn.sap.com/wiki/display/EIM/Data+Services but there are 3 golden rules I apply:

i) Do major transforms on the way into SAP HANA, not in SAP HANA. This advice will change because the Data Services ETL engine is being integrated into SAP HANA, but for now, get your data formats right (e.g. combine date and time into a timestamp) on the way in, rather than using materialized views in SAP HANA, which is expensive (see the sketch after these three rules).

ii) Dial in the three main parameters for your scenario: Parallel Process threads in the file loader, and commit size and number of loaders in the HANA target table options (with bulk loading enabled). I start with 40 file loader threads, a 10,000 commit size and 4 HANA loaders, and tweak from there. Data Services takes care of optimized merge deltas, which is very cool.

iii) Make sure you have plenty of power on the Data Services box – you can easily consume 16-20 x86 CPU cores – make sure the threads are fast, and make sure you have 10GbE Ethernet between Data Services and HANA.
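Coming back to rule i), here is a hedged sketch of the difference (the table and column names are invented): the first variant assumes the ETL tool has already combined date and time into a real TIMESTAMP on the way in; the second lands raw strings and pays for the conversion inside HANA on every query.

-- Preferred: land a native TIMESTAMP, built by Data Services during the load
CREATE COLUMN TABLE "SCHEMA"."ORDERS" (
     "ORDER_TS" TIMESTAMP,
     "MEASURE1" INT);

-- More expensive: land raw strings and convert at query time in a HANA view
CREATE COLUMN TABLE "SCHEMA"."ORDERS_RAW" (
     "ORDER_DATE" CHAR(10),
     "ORDER_TIME" CHAR(8),
     "MEASURE1" INT);

CREATE VIEW "SCHEMA"."V_ORDERS" AS
SELECT TO_TIMESTAMP("ORDER_DATE" || ' ' || "ORDER_TIME", 'YYYY-MM-DD HH24:MI:SS') AS "ORDER_TS",
       "MEASURE1"
FROM "SCHEMA"."ORDERS_RAW";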

c) SAP Landscape Transformation (LT)

LT is a very neat solution for replicating tables from SAP systems. It is trigger-based so there is a small overhead, and it is therefore suited to low-mid volume transactional tables like sales orders. It can be used to replicate tables from non-SAP databases like Oracle and this makes sense in a SAP shop. If you’re not a SAP shop then you wouldn’t use LT because it requires a SAP NetWeaver platform (either standalone for LT, or built into another SAP product like the Business Suite).

In these scenarios, LT is easy to set up to replicate tables and this is especially cool with SAP’s Rapid Deployment Solutions (RDS), which provide pre-packaged HANA content that consume database tables from LT.

The main things to note with LT are the need for a Unicode LT system (it can be used against a non-Unicode source ERP) and the need to patch your ERP system to a recent kernel, which may trigger a regression test for your production environment. Also note that ERP, LT and HANA must be on a dedicated 10GbE Ethernet network for acceptable latency.

d) Sybase Replication Server (SRS)

SRS was originally the replication technology of choice for SAP ERP scenarios, before the advent of LT. It has several benefits over LT because it scrapes database logs, which has lower latency and overhead than the trigger-based LT. Stock exchanges in capital markets use SRS to replicate trading data between continents – it's that fast.

You do however need to have a full Enterprise license for your source DB, and many SAP customers buy a restricted runtime license via the Software Application Value (SAV) program, which allows them to rent their Oracle or IBM database for 8-15% of their Application Value. If you fall into this category, then you cannot use SRS legally.

In addition, because it scrapes database logs, SRS has some technical dependencies like Unicode, certain versions and certain databases.

But if what you want is a high-performance sidecar of an existing transactional database into SAP HANA, then SRS is a very interesting product, if niche.

e) Sybase Event Stream Processing (ESP)

ESP is the dark horse of the ETL family because it can do everything, and it can do some things very well. Its core purpose is stream processing, and you can look at the number of different streams it supports here: SyBooks Online – including Tibco Rendezvous, Capital Markets streams like Wombat and FIX, and files.

Its flexibility is excellent: you can read fixed-format, CSV, FIX, XML or any other kind of data and output it to a stream, or to a database or file adapter. What's more, ESP is being integrated into the HANA database, so you will be able to do all of this in-memory. ESP is low-latency and can shift a very large number of messages per second (I hear millions), depending on the hardware and networks you have at your disposal. You can also keep a buffer of data in memory for time-based stream processing.

As with any stream processor, when you hit saturation you will start to buffer incoming or outbound messages in RAM, then when RAM fills up, to disk, and then bad things happen. As with Data Services, you can adapt the number of inbound and outbound feeds to get the performance you need, and adjust the output batch size accordingly. Note that ESP is very well integrated with the SAP HANA ODBC interface, so you can expect excellent performance.

5) Benchmark and iterate

Now you have chosen a product for your loading, you need to benchmark and iterate to try and get as close to the control file performance as possible.

In my tests, I found that the bulk loader performed about 2x as well as anything else, but it is sharing CPU capacity with SAP HANA. I expect that by putting Data Services or ESP onto a separate system with a low-latency, high-bandwidth 10GbE network, you can get to 70-80% of bulk loading performance.

Conclusions

A number of people I spoke to suggested that a set of best practices for data loads wasn't possible, because every customer is different. I'd counter that by saying that a set of best practices is required to understand which tool should be used and when, depending on a potentially complex set of criteria. My first conclusion is that, with some thought, it is simple to select the right tool.

The second learning is that each data set performs slightly differently with HANA because of the complexities of massively parallel processing, columnar compression, column widths and the need to transform data. Once you have selected the correct tool, it is necessary to iterate to get optimal settings – and iterating is well worth it.

The third learning is to make sure you do not compare HANA bulk loading to Oracle bulk loading. HANA bulk-loaded data is fully report-ready. Oracle bulk loading literally dumps data into database rows, with no indexes, aggregates or save point, and it breaks the ACID properties of the database until a backup is made. There is no point in making this comparison.

And a big shout out to:

The IBM X5 appliance that I use for testing. It is a 4x512GB = 2TB, 160 core appliance in a 22U rack.

All my contacts at SAP who have assisted in this, including Werner Daehn, David Rosenblum, Werner Steyn, Stefan Sigg, Keith Wood and Tom Ziek.

Lloyd Palfrey at Bluefin who helped out on the HANA appliance side and Kevin Turnbull and team for creating virtual machines at short notice and generally putting up with me.

      35 Comments
      Henrique Pinto

      Hi John,

      great blog, as always.

      I do hope we see bulk loader-like performance with DS, very soon.

      One thing about ESP: even though it kind of does the same thing, I'm not sure you can put it in the "ETL" pack, simply because it does not guarantee persistence. Meaning, you could have a record in the input stream that is calculated properly by the algorithms you apply, but you cannot be 100% sure you'll see that record in the output adapter.

      John Appleby (Blog Post Author)

      Hi Henrique,

      Thanks. I think you're right... and wrong...

      It's true that ESP, like any stream processor, is "unreliable" in its nature because it does not guarantee consistency.

      However there are not that many examples when this can happen in the real world - there has to be a bottleneck in the system which causes a backup or network failures.

      And if what you require is real-time data then unless you use LT or SRS to just replicate another database, you need a stream. And if you did use LT or SRS to replicate another database, presumably that other database was populated by a stream, if it is consuming real-time data 🙂

      John

      Henrique Pinto

      However there are not that many examples when this can happen in the real world - there has to be a bottleneck in the system which causes a backup or network failures.

      True. However, that's exactly why RDBMSs and ETL tools have logs – to recover from such potential errors. ESP (or any CEP tool, for that matter) doesn't log anything.

      And if what you require is real-time data then unless you use LT or SRS to just replicate another database, you need a stream. And if you did use LT or SRS to replicate another database, presumably that other database was populated by a stream, if it is consuming real-time data 🙂

      Oh there I agree with you. I thought we were talking about consuming from other DBs or CSV, XML, TXT files as you had mentioned in the blog. If your input is the stream itself, then you don't have many other options.

      On the other hand, though, what we really needed in this case was not the whole CEP engine, but just the stream input adapter and the high speed HANA output adapter (ESP 5.1 SP01+). The input adapter is not much more than something like a HTTP listener that then triggers the records (events, in CEP terminology) to be passed through the whole calculation flow (needless, in this loading case). SAP says, in the long term RTDP strategy, that ESP and HANA will be somehow merged. If we were to have such a native input stream directly into HANA, then how awesome would that be? 😉

      Rama Shankar

      Thanks John !

      Nigel James

      Actually, avoid using the HANA database for referential integrity if you can, it is expensive

      Hey John, I would guess that for a lot of scenarios referential integrity would be reasonably important. Can you elaborate on this?

      Thanks,

      Nigel

      Henrique Pinto

      I suppose he meant from a BI data model perspective.

      Former Member

      I am also looking forward to a more detailed explanation of why not to use referential integrity in HANA, and what the workaround would be if we encounter such scenarios.

      Thanks,

      Chandan Murthy

      Raj Kumar S

      Amazing blog, very much impressed 🙂

      Former Member

      Thanks for sharing, that's really impressive.

      Former Member

      Hi everyone,

      it's a great blog John thanks for sharing your valuable infos. I want to add something to the part "ii) Data Generator", if someone is interested in data generators and data generation I refer to our wiki: https://wiki.wdf.sap.corp/wiki/display/hampton/Data+Extractor+and+Generator+%28DEG%29

      This is an internal tool set we developed for the purpose of extracting the structure of a database and generating data based on extracted information.

      Cheers,

      Aryan

      Former Member

      Hi John

      Thanks for the well-written article. Can I ask: in your example, is HANA used as the underlying database for replication from SAP ERP/ECC, or is it a custom reporting database?

      We have a situation where we need to load (data migration) into SAP ECC hosted on a HANA database, and our Basis team thinks that a direct load into HANA, bypassing the application layer, is possible using SAP Data Services. I think that is not possible, because in that case what program would make sure the integrity of the system stays intact? As I understand it, whatever the underlying DB is, we will have to use proven methods in LSMW, or we can use the underlying IDocs/BAPIs from Data Services.

      John Appleby (Blog Post Author)

      Hi,

      Either/or/both, depending on the customer.

      With your situation where you plan to run ERP on HANA, it would be unusual to load directly into the tables, because as you say, you want to ensure data integrity. Typically data is migrated using either a database migration, or by using a tool like TDMS.

      For times when you want to consolidate from other non-SAP ERPs you might use LSMW or Data Services, but this also would be unusual.

      My advice though would be to get some independent advice from an ERP on HANA architect as to how best to migrate your data.

      John

      Former Member

      Hi John

      Just making it clear that my requirement is from non-SAP legacies to SAP ECC.

      John Appleby (Blog Post Author)

      Interesting scenario. There are lots of data migration tools like this:

      http://www.winshuttle.com/solutions-sap/by-function-task/data-upload-import/

      Need to figure out the right approach based on your source systems. Loading data directly into ERP base tables is not generally a supported scenario.

      Deborah Castle

      The use of CHAR vs. NVARCHAR surprised me a bit.  The CHAR data type isn't even documented in the HANA SQL reference.  I tested the CHAR in a create statement and it did work just fine.

      However, I looked at some standard tables and they seem to use the NVARCHAR exclusively for character fields.  Admittedly my data sample was small - I looked at KNA1, /MRSS/D_SQP_ESTR, and PA0006.

      By not documenting it, it seems they are discouraging the use.

      Krishna Tangudu

      Hi Deborah,

      Have a look on this discussion :

      While defining a columnar table, what is the purpose of column store type

      Regards,

      Krishna Tangudu

      Deborah Castle

      Thanks for the pointer.  I was curious that this article focused on VARCHAR, not NVARCHAR.  I have read comparison of VARCHAR vs. NVARCHAR and ended up using NVARCHAR, which is consistent with SAP standard tables.

      Articles such as the following claim that NVARCHAR is always used at a low level and that using VARCHAR requires conversion and code pages.

      sql server - What is the difference between varchar and nvarchar? - Stack Overflow

      Justin Molenaur

      John, there has been some conversation recently about controlling the merge of tables during loads. Can you provide some more information about your testing results with regard to disabling auto merge during the load, merging, and then re-enabling auto merge after the load?

      I have done some other testing and was unable to confirm that manually controlling the merge produces big savings in processing time, so I would be interested to know what you observed.

      Also - just so I am clear - your statement is definitively that CHAR types will outperform VARCHAR on write, but does that also apply to read? This observation would be a little surprising, since all SLT-managed tables are predominantly replicated with NVARCHAR data types, so it would be shortsighted on SAP's part.

      And lastly - I'm not sure what revision this was based on, but wondering if the same findings are valid for newer SPS07 revision as well.

      Regards,

      Justin

      John Appleby (Blog Post Author)

      So yes, managing merges is a good thing if you have a lot of data. Otherwise you can get memory bubbles, and a merge can kick in whilst you are still loading a partition, which means you need to merge twice.

      In many cases however you may not perceive a difference, if your partitioning strategy is sane, and you don't have huge files that take a long time to load.

      Last time I checked CHAR does outperform, especially for smaller data types. This doesn't really make any sense.

      I've not retested this stuff with SPS07.

      Justin Molenaur

      John, you know I can't leave this alone without some testing results 🙂

      Set up something quick to test the following points on a ~120 million row insert:

      - Insert speed CHAR vs. NVARCHAR with auto merge enabled

      - Insert speed while controlling merge process manually vs. automerging

      - Column size in memory after loads

      - SELECT speed

      In this test I did not partition, and the structure is limited to only one column so I can try to isolate the above points as best as possible. I was literally just inserting from one table into the two targets using a very high cardinality value, so I know this won't compress. I may follow up a test using data that will compress well.


      Structures

      CREATE COLUMN TABLE "MOLJUS02"."TEST_TYPE_CHAR" ("BELNR" CHAR(10) CS_FIXEDSTRING) UNLOAD PRIORITY 5 AUTO MERGE;

      CREATE COLUMN TABLE "MOLJUS02"."TEST_TYPE_NVARCHAR" ("BELNR" NVARCHAR(10)) UNLOAD PRIORITY 5 AUTO MERGE;

      Insert Speeds with auto merge enabled

      CHAR

      INSERT INTO "MOLJUS02"."TEST_TYPE_CHAR"

      SELECT "BELNR" FROM "SE2_SH2"."CE12000"

      WHERE "PERIO" IN ('2014001', '2013012')

      Statement 'INSERT INTO "MOLJUS02"."TEST_TYPE_CHAR" SELECT "BELNR" FROM "SE2_SH2"."CE12000" WHERE "PERIO" IN ...'

      successfully executed in 5:24.964 minutes  (server processing time: 5:24.977 minutes) - Rows Affected: 128319971;

      Looking in "SYS"."M_DELTA_MERGE_STATISTICS", MERGE 673s, MERGE 64s and SPARSE takes 50 seconds

      So total insert including Merge time takes 18m24s


      NVARCHAR

      INSERT INTO "MOLJUS02"."TEST_TYPE_NVARCHAR"

      SELECT "BELNR" FROM "SE2_SH2"."CE12000"

      WHERE "PERIO" IN ('2014001', '2013012');

      Statement 'INSERT INTO "MOLJUS02"."TEST_TYPE_NVARCHAR" SELECT "BELNR" FROM "SE2_SH2"."CE12000" WHERE "PERIO" ...'

      successfully executed in 4:17.398 minutes  (server processing time: 4:17.408 minutes) - Rows Affected: 128319971

      Looking in "SYS"."M_DELTA_MERGE_STATISTICS", MERGE 550s, MERGE 33s and SPARSE takes 50 seconds

      So total insert including Merge time takes 14m47s


      Insert Speeds with manual merge control

      CHAR

      ALTER TABLE "MOLJUS02"."TEST_TYPE_CHAR" DISABLE AUTOMERGE;

      INSERT INTO "MOLJUS02"."TEST_TYPE_CHAR"

      SELECT "BELNR" FROM "SE2_SH2"."CE12000"

      WHERE "PERIO" IN ('2014001', '2013012');

      MERGE DELTA OF "MOLJUS02"."TEST_TYPE_CHAR";

      ALTER TABLE "MOLJUS02"."TEST_TYPE_CHAR" ENABLE AUTOMERGE;

      Statement 'INSERT INTO "MOLJUS02"."TEST_TYPE_CHAR" SELECT "BELNR" FROM "SE2_SH2"."CE12000" WHERE "PERIO" IN ...'

      successfully executed in 5:27.636 minutes  (server processing time: 5:27.649 minutes) - Rows Affected: 128319971

      Statement 'MERGE DELTA OF "MOLJUS02"."TEST_TYPE_CHAR"'

      successfully executed in 33.951 seconds  (server processing time: 33.944 seconds) - Rows Affected: 0

      Looking in "SYS"."M_DELTA_MERGE_STATISTICS", SPARSE takes 35 seconds

      So total insert including Merge time takes 6m35s


      NVARCHAR

      ALTER TABLE "MOLJUS02"."TEST_TYPE_NVARCHAR" DISABLE AUTOMERGE;

      INSERT INTO "MOLJUS02"."TEST_TYPE_NVARCHAR"

      SELECT "BELNR" FROM "SE2_SH2"."CE12000"

      WHERE "PERIO" IN ('2014001', '2013012');

      MERGE DELTA OF "MOLJUS02"."TEST_TYPE_NVARCHAR";

      ALTER TABLE "MOLJUS02"."TEST_TYPE_NVARCHAR" ENABLE AUTOMERGE;

      Statement 'INSERT INTO "MOLJUS02"."TEST_TYPE_NVARCHAR" SELECT "BELNR" FROM "SE2_SH2"."CE12000" WHERE "PERIO" ...'

      successfully executed in 4:19.846 minutes  (server processing time: 4:19.855 minutes) - Rows Affected: 128319971

      Statement 'MERGE DELTA OF "MOLJUS02"."TEST_TYPE_NVARCHAR"'

      successfully executed in 30.462 seconds  (server processing time: 30.461 seconds) - Rows Affected: 0

      Looking in "SYS"."M_DELTA_MERGE_STATISTICS", SPARSE takes 35 seconds

      So total insert including Merge time takes 5m24s


      Column Sizes

      TEST_TYPE_CHAR - Total table size = 610,996 KB

      TEST_TYPE_NVARCHAR - Total table size = 610,997 KB

      SELECT

      Absolutely no difference when selecting.

      Conclusions

      - NVARCHAR actually has a FASTER insert by 16 - 20%

      - CHAR and NVARCHAR have virtually no difference when it comes to column size in memory

      - Controlling the merge process manually really doesn't achieve any efficiencies in the actual INSERT process; rather, the merge process becomes more efficient after the fact. I think the "memory bubble" you mention is apparent in the auto merge example, where the entries in M_DELTA_MERGE_STATISTICS show a very long-running MERGE step where no records are shown as merged, followed by a short 30s MERGE that actually does the work. On its face, this just increases the resources needed after the insert and causes some delay in the data being optimized for reporting.

      John Appleby (Blog Post Author)

      So you're using too few rows to see the effects of the merge. Try doing 1.2bn rows in 10 segments, using either automerge or load, merge, load, merge. It's in this circumstance that you'll see an improvement.

      Plus, you have to use partitions if you want to see good load performance. Especially if you can load by partition and then merge, that's when you'll see the difference.

      Also make sure you load from flat file rather than another table, or weird things can happen.

      Interesting on the CHAR/NVARCHAR, guess they optimized this.

      Justin Molenaur

      I'll do another round using a column that can be highly compressed and also amp up the row count as you mention. This analysis is taking a couple of turns.

      I would hope weird things don't happen when I pass data between tables internally within HANA 🙂 .

      Regards,

      Justin

      Justin Molenaur

      Now let's take it a step further to show the power of partitioning as well. We know for a fact that smaller partitions always help; I just want to quantify it in this example.

      We'll make the assumption that NVARCHAR is preferred based on the above analysis.

      ALTER TABLE "MOLJUS02"."TEST_TYPE_NVARCHAR" PARTITION BY HASH ("BELNR") PARTITIONS 10;

      Insert Speeds with auto merge enabled

      INSERT INTO "MOLJUS02"."TEST_TYPE_NVARCHAR"

      SELECT "BELNR" FROM "SE2_SH2"."CE12000"

      WHERE "PERIO" IN ('2014001', '2013012');

      Statement 'INSERT INTO "MOLJUS02"."TEST_TYPE_NVARCHAR" SELECT "BELNR" FROM "SE2_SH2"."CE12000" WHERE "PERIO" ...'

      successfully executed in 2:30.882 minutes  (server processing time: 2:30.886 minutes) - Rows Affected: 128319971

      Looking in "SYS"."M_DELTA_MERGE_STATISTICS", MERGE takes 60 seconds, SPARSE takes 40 seconds

      So total insert including Merge time takes 4m10s

      Insert Speeds with manual merge control

      ALTER TABLE "MOLJUS02"."TEST_TYPE_NVARCHAR" DISABLE AUTOMERGE;

      INSERT INTO "MOLJUS02"."TEST_TYPE_NVARCHAR"

      SELECT "BELNR" FROM "SE2_SH2"."CE12000"

      WHERE "PERIO" IN ('2014001', '2013012');

      MERGE DELTA OF "MOLJUS02"."TEST_TYPE_NVARCHAR";

      ALTER TABLE "MOLJUS02"."TEST_TYPE_NVARCHAR" ENABLE AUTOMERGE;

      Statement 'INSERT INTO "MOLJUS02"."TEST_TYPE_NVARCHAR" SELECT "BELNR" FROM "SE2_SH2"."CE12000" WHERE "PERIO" ...'

      successfully executed in 2:19.579 minutes  (server processing time: 2:19.532 minutes) - Rows Affected: 128319971

      Statement 'MERGE DELTA OF "MOLJUS02"."TEST_TYPE_NVARCHAR"'

      successfully executed in 1:03.479 minutes  (server processing time: 1:03.434 minutes) - Rows Affected: 0

      Looking in "SYS"."M_DELTA_MERGE_STATISTICS", SPARSE takes 30 seconds

      So total insert including Merge time takes 3m52s


      Conclusion

      - Use of partitioning has a strong effect on both insert and merge processing times, which was known beforehand but is proven in the context of this example. Also of interest, the total size of the table is about 50MB smaller.


      Regards,

      Justin

      Justin Molenaur

      Larger data set of ~500m rows with 10 partitions. A little surprised by the result, since manual merge control with the larger dataset actually performs about 10% slower.

      Auto Merge in place

      Statement 'INSERT INTO "MOLJUS02"."TEST_TYPE_NVARCHAR" SELECT "BELNR" FROM "SE2_SH2"."CE12000" WHERE "PERIO" ...'

      successfully executed in 17:55.844 minutes  (server processing time: 17:55.877 minutes) - Rows Affected: 529254367

      Looking in "SYS"."M_DELTA_MERGE_STATISTICS", MERGE takes 180 seconds, SPARSE takes 180 seconds

      So total insert including Merge time takes 23m55s


      Manual Merge control

      Statement 'INSERT INTO "MOLJUS02"."TEST_TYPE_NVARCHAR" SELECT "BELNR" FROM "SE2_SH2"."CE12000" WHERE "PERIO" ...'

      successfully executed in 18:33.883 minutes  (server processing time: 18:33.921 minutes) - Rows Affected: 529254367

      Statement 'MERGE DELTA OF "MOLJUS02"."TEST_TYPE_NVARCHAR"'

      successfully executed in 4:31.946 minutes  (server processing time: 4:31.957 minutes) - Rows Affected: 0

      Looking in "SYS"."M_DELTA_MERGE_STATISTICS", SPARSE takes 180 seconds

      So total insert including Merge time takes 26m

      Mustafa KUTLU

      Have you tested the import with the 'table lock' parameter?

      It writes the data directly to the main store and eliminates the need for a delta merge.
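      Per the IMPORT documentation, the variant I mean looks something like this (parameters borrowed from the blog example above; treat it as a sketch and check the syntax for your revision):

      IMPORT FROM '/sapmnt/log/test.ctl' WITH THREADS 120 TABLE LOCK;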

      Fernando Da Ros

      Hi Mustafa,

      Do you have a documentation pointer for this "data directly to main store" behaviour?

      Or is it your experience that it's faster?

      Table lock means that you lock the table exclusively, so the process holding the lock is the only one that can write to the table - but it's just that.

      The only way I know of that goes directly to the main store is if you are importing a binary export that you made earlier.

      I look forward to hearing from you.

      Regards, Fernando Da Rós

      John Appleby (Blog Post Author)

      The last time I tested with table lock, it didn't make any difference. But that was back in 2012. Has anyone tested recently?

      Mustafa KUTLU

      Please check the link below for table lock in import command.

      http://help.sap.com/hana_one/html/sql_import_from.html

      I have experienced that it does not produce a delta and writes directly to the column store main storage.

      So there is no need for a delta merge after the import.

      I tested this on a very small VMware HANA instance and couldn't see much improvement compared to not using table lock. It may help on a real appliance.

      John Appleby (Blog Post Author)

      Yes I am aware of what the import documentation says, but my testing showed that there was no difference between:

      LOAD WITH TABLE LOCK

      or

      LOAD

      MERGE

      Has anyone seen any difference in practice?

      John

      Hari Sagar Pacca

      Great Blog!

      Dantuluri Kalyan

      Hi John,

      I have a question:

      Scenario:

      - I have 5 different tables in my source system.

      - Data is getting replicated to HANA.

      - But as per the business requirements I have to materialize all 5 tables into 1 table in HANA (i.e. I am doing a union into 1 table based on 2 fields), to achieve sub-second response in my report (i.e. I am grouping 10 characteristics and aggregating 10 measures).

      So my question is: can I replicate the data from the 5 tables into 1 table through SLT? Basically, can I union 5 tables' data into one table through SLT?

      Please suggest!!

      Thanks

      kalyan

      Benedict Venmani Felix

      Hello John,

      Thanks for the excellent guidance. I have a question regarding point 2, for which I have raised a question in the discussion space. Please take a look and let me know your thoughts.

      HANA Node vs Host and Table Partitioning

      -Benedict

      Antony Jerald J

      Hello John,

      Nice document. Thanks for sharing.

      We are doing runtime calculations on more than 2 billion records, using temporary tables to store data for further calculation. But this throws a "column store error" because it exceeds the maximum table size limitation after executing for more than an hour and a half.

      Is there a better way of handling this situation and optimizing the performance of the procedure?

      Kindly suggest.

      Regards,

      Antony Jerald.

      Former Member

      Hi,

      We are trying to read more than 12m records from an Oracle source into HANA through SAP BODS 14.x and are getting a No Space error. We have tried to split the data across quarters and years, but no luck. Any inputs will be greatly appreciated.

      Thanks,

      Abdulrasheed.

      Jesús Antonio Santos Giraldo

      Excellent blog. Very useful.