I was curious about the inner workings of HANA. When you change data in column store tables, what is the effect? How much is written to the data files, how much do the backup dumps change, and how much is written to the log files? With a real SAP system you can measure the activity of the HANA database, but it is hard to measure the actual amount of changed data in a controlled way. As usual, SAP provides some documentation, but not in the area I am interested in (with the usual disclaimer: as far as I can see). The real fun with SAP always starts with reverse engineering, so I decided to create my own lab experiment.

My lab consists of:

- HANA 1.0 revision 70

- 7.4 GB or 49.108.572 rows of real-life ASCII data

- resulting in a 960 MB column store table
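If you want to reproduce the lab, such a test table could be set up roughly as follows. This is only a sketch assuming the hdbcli Python driver; the table layout, connection data, file path, and delimiters are made-up placeholders (CREATE COLUMN TABLE and IMPORT FROM CSV FILE are standard HANA SQL):

```python
# Sketch: create the column store test table and bulk-load the ASCII data.
# Hostname, credentials, table layout, file path and delimiters are
# made-up placeholders for illustration.
from hdbcli import dbapi

conn = dbapi.connect(address="hanahost", port=30015,
                     user="SYSTEM", password="***")
cur = conn.cursor()

cur.execute('CREATE COLUMN TABLE "LAB"."TESTDATA" ('
            'RUN_ID INTEGER, DOCNR NVARCHAR(10), PAYLOAD NVARCHAR(200))')

# Server-side bulk import; the CSV file must be readable by the HANA host.
cur.execute("IMPORT FROM CSV FILE '/tmp/testdata.csv' "
            'INTO "LAB"."TESTDATA" '
            "WITH RECORD DELIMITED BY '\\n' FIELD DELIMITED BY ';'")
```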

I wanted to measure, in a controlled way, how the database reacts to changes in this table. First I deleted some data, then I inserted the same amount of (different) data; these two steps should simulate an update. After each step I checked the in-memory table size, created a backup dump, measured how much of the data file and the backup dump file had changed, and determined the amount of log written. The sketch below shows how one such measurement cycle could be scripted; the table that follows summarizes my measurements.
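For illustration, here is a minimal sketch of one cycle, again assuming the hdbcli driver. The delete predicate, the staging table for the re-insert, and the backup prefix are hypothetical placeholders; M_CS_TABLES, MERGE DELTA OF, and BACKUP DATA are standard HANA SQL, but details may differ between revisions:

```python
# Minimal sketch of one delete/insert measurement cycle. All names,
# predicates and connection data are placeholders; adapt to your own lab.
from hdbcli import dbapi

conn = dbapi.connect(address="hanahost", port=30015,
                     user="SYSTEM", password="***")
cur = conn.cursor()

def snapshot(schema, table):
    """Read in-memory main size and record count from M_CS_TABLES.
    SUM() aggregates over partitions (M_CS_TABLES has one row per PART_ID)."""
    cur.execute("SELECT SUM(MEMORY_SIZE_IN_MAIN), SUM(RECORD_COUNT) "
                "FROM M_CS_TABLES WHERE SCHEMA_NAME = ? AND TABLE_NAME = ?",
                (schema, table))
    return cur.fetchone()

before = snapshot("LAB", "TESTDATA")

# Delete a slice of the data, then re-insert a comparable amount of
# different rows (STAGING is a placeholder source table).
cur.execute('DELETE FROM "LAB"."TESTDATA" WHERE RUN_ID = ?', (1,))
cur.execute('INSERT INTO "LAB"."TESTDATA" SELECT * FROM "LAB"."STAGING"')

# Optional manual delta merge (in the experiment this was done manually
# only after delete #2 and delete #3).
cur.execute('MERGE DELTA OF "LAB"."TESTDATA"')

after = snapshot("LAB", "TESTDATA")
print("MEMORY_SIZE_IN_MAIN:", before[0], "->", after[0])
print("RECORD_COUNT:       ", before[1], "->", after[1])

# Backup dump for the offline change-rate comparison.
cur.execute("BACKUP DATA USING FILE ('lab_cycle_01')")
```

The changed portions of the data file and the backup dump were then determined outside the database by comparing saved file states; a sketch of that comparison follows after the findings below.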

| Description | Delete #1 | Insert #1 | Delete #2 | Insert #2 | Delete #3 | Insert #3 | Delete #4 | Insert #4 | Delete #5 | Insert #5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MEMORY_SIZE_IN_MAIN (Bytes) | 998.832.085 | 1.049.845.069 | 1.041.197.293 | 1.071.021.253 | 1.075.204.349 | 1.086.465.005 | 1.089.642.421 | 1.117.598.501 | 1.124.042.589 | 1.146.048.837 |
| RECORD_COUNT | 47.359.893 | 49.187.726 | 47.620.728 | 49.179.120 | 47.542.719 | 49.117.838 | 45.957.846 | 49.701.911 | 43.134.825 | 49.712.183 |
| delta rows | 1.748.679 | 1.827.833 | 1.566.998 | 1.558.392 | 1.636.401 | 1.575.119 | 3.159.992 | 3.744.065 | 6.567.086 | 6.577.358 |
| delta size (raw ASCII Bytes) | 276.500.919 | 288.948.984 | 247.715.063 | 246.271.066 | 258.70.592 | 249.046.220 | 499.568.790 | 591.892.341 | 1.038.293.459 | 1.039.845.787 |
| delta size (column store Bytes) | 9.232.860 | 51.012.984 | 8.647.776 | 29.823.960 | 4.183.096 | 11.260.656 | 3.177.416 | 27.956.080 | 6.444.088 | 22.006.248 |
| changed datafile (Bytes) | 135.090.176 | 1.318.801.408 | 1.112.358.912 | 1.369.174.016 | 1.119.375.360 | 1.366.949.888 | 149.934.080 | 3.299.258.368 | 91.615.232 | 3.882.795.008 |
| ratio of changed datafile | 0,036 | 0,351 | 0,296 | 0,364 | 0,298 | 0,364 | 0,040 | 0,819 | 0,023 | 0,851 |
| delta backupfile (Bytes) | 48.971.776 | 1.066.991.616 | 1.060.003.840 | 1.117.069.312 | 1.073.582.080 | 1.098.489.856 | 62.418.944 | 1.202.388.992 | 90.517.504 | 1.284.259.840 |
| ratio of changed backupfile | 0,037 | 0,780 | 0,786 | 0,790 | 0,793 | 0,767 | 0,043 | 0,835 | 0,061 | 0,844 |
| logfiles written (Bytes) | 16.314.368 | 239.538.176 | 14.798.848 | 204.611.584 | 14.266.368 | 206.983.168 | 25.640.960 | 486.739.968 | 55.025.664 | 856.743.936 |
| Annotations | 3,2% changes | | delta merge, 3,2% changes | | delta merge, 3,2% changes | | 6,5% changes | | 13% changes | |

(Numbers use German formatting: "." as thousands separator, "," as decimal separator.)

The table contains a lot of information; let me summarize my most interesting findings:

  1. After delete #2 and delete #3 I manually performed a delta merge on the table, which is of course unnecessary, if not outright useless. Surprisingly, this delta merge had a huge effect on the data file and the backup dump file. Delete #1 was comparable in size but was not followed by a manual delta merge, and it shows only very small changes to the data file and the backup dump file.
  2. Delta merges after inserts always have a significant effect on the data file and the backup dump file. The changes to the data file are typically several times larger than the actual amount of changed data. This is not an issue for HANA, since these writes happen asynchronously, but it is important to know that the amount of change in the data files is not representative of how much the data actually changed.
  3. Even though the delta merge causes significant changes to the table representation in both the data file and the backup dump file, most real-life SAP systems can still benefit from backup deduplication, because only a tiny percentage of the overall data changes per day. (I verified this on an actual BW on HANA system.) However, I predict that there is a critical threshold: if too many tables (or partitions) are changed and therefore reorganized by a delta merge, backup deduplication will no longer pay off. (A sketch of the block-level comparison used to measure the changed portions follows after this list.)
  4. Only Delete #1 and Delete #2 reduced the memory consumption of the table. Delete #3, Delete #4 and Delete #5 increased the memory consumption of my sample table! The data file and the backup dump file also grew by ~20% over time, even though the amount of data was roughly the same at the beginning and at the end of my experiment.
  5. The amount of log written by HANA correlates very well with the amount of (raw ASCII) data inserted into the database. So, as a rule of thumb: if you want to know how much data has been added to or updated in your HANA instance, have a look at the log files (see the query sketch below).
  6. Deletes are very efficient in terms of log volume; only a small amount of data needs to be written to the logs. Don't expect any significant reduction of the in-memory space requirements, however.
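As promised, here is a sketch of how the "changed datafile" and "changed backupfile" numbers can be produced: the script compares two saved copies of a file block by block and counts the bytes that live in changed blocks. The 4 KiB granularity and the file names are my assumptions, not anything prescribed by HANA.

```python
# Sketch: compare two saved copies of a data or backup file block by block
# and count how many bytes live in changed blocks. The 4 KiB granularity
# and the file names are assumptions for illustration.
import os

BLOCK = 4096  # assumed comparison granularity

def changed_bytes(path_before, path_after, block=BLOCK):
    changed = 0
    with open(path_before, "rb") as f1, open(path_after, "rb") as f2:
        while True:
            b1, b2 = f1.read(block), f2.read(block)
            if not b1 and not b2:
                break
            if b1 != b2:
                # Count the larger side so file growth counts as change.
                changed += max(len(b1), len(b2))
    return changed

delta = changed_bytes("backup_cycle_00", "backup_cycle_01")
total = os.path.getsize("backup_cycle_01")
print(f"changed: {delta} bytes, ratio: {delta / total:.3f}")
```

Note that a coarser block size makes the change ratio look larger, which is also roughly what a block-based deduplication appliance would see.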
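For the rule of thumb in finding 5, the cumulative write counters in the monitoring view M_VOLUME_IO_TOTAL_STATISTICS can be sampled before and after a workload; the difference approximates the log volume written in between. Again a sketch with placeholder connection data, and column details may vary by revision:

```python
# Sketch: sample HANA's cumulative log write counters around a workload.
# Connection data is a placeholder.
from hdbcli import dbapi

conn = dbapi.connect(address="hanahost", port=30015,
                     user="SYSTEM", password="***")
cur = conn.cursor()

def log_write_size():
    # Cumulative bytes written to all LOG volumes since service start.
    cur.execute("SELECT SUM(TOTAL_WRITE_SIZE) "
                "FROM M_VOLUME_IO_TOTAL_STATISTICS WHERE TYPE = 'LOG'")
    return cur.fetchone()[0]

before = log_write_size()
# ... run the delete/insert workload here ...
after = log_write_size()
print("log written (bytes):", after - before)
```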

If you have questions or if something was unclear, please ask.
