Myths of HANA
Since SAP HANA became generally available in 2011, I have come across a lot of untruths about the new in-memory platform. As a consultant I was able to talk to many customers and other consultants at events like TechEd, DSAG, business partner days, etc. Every time I was surprised that, after all this time, so much dangerous half-knowledge is still out there. Some of it can easily be eliminated by reading SAP Note 2100010 (SAP HANA: Popular Misconceptions).
Most of the answers to these statements are pretty easy to find in the official notes, guides, and other documents (blogs, presentations, articles, etc.), but maybe it is simply an overload of information.
1) Start time
2) Cross SID backup
3) Col / row store conversion
4) Sizing * 2
5) Statistics
6) Data fragmentation
7) Persistency layer
8) High memory consumption HANA vs. Linux
9) Restore and log backups
10) Backup catalog
S stands for the statement and A for the answer.
The SQL scripts used here are available in the attachment of SAP Note 1969700 – SQL Statement Collection for SAP HANA.
1) Start time
S: “The start time (availability of the SAP system) must be 30 to 60min to load all data into memory”
A: Yes, loading all data into memory takes some time, but any DB also needs time to fill its data buffer. For any DB the data buffer is filled on first access of the data, and the data stays in the buffer until the LRU (least recently used) algorithm takes place and pushes it out.
HANA loads the complete row store into memory on every start. After this the system is available!
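The LRU eviction mentioned above can be sketched with a tiny buffer model (a plain Python sketch; the capacity and page names are made up for illustration, this is not HANA code):

```python
from collections import OrderedDict

class LruBuffer:
    """Minimal data-buffer model: pages are cached on first access,
    the least recently used page is evicted when capacity is reached."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # key -> page payload

    def access(self, key):
        if key in self.pages:
            self.pages.move_to_end(key)         # mark as most recently used
        else:
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)  # evict least recently used
            self.pages[key] = "payload"
        return list(self.pages)

buf = LruBuffer(capacity=3)
for page in ["A", "B", "C", "A", "D"]:  # "B" is least recently used when "D" arrives
    state = buf.access(page)
print(state)  # -> ['C', 'A', 'D']
```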
Short description of start procedure:
1) open data files
2) read out information about last savepoint ( mapping of logical pages to physical pages in the data file / open transaction list)
3) load row store (depends on the size and the I/O subsystem; about 5min for 100GB)
4) replay redo logs
5) roll back uncommitted transactions
6) perform savepoint
7) load column tables defined for preload, plus lazy load of column tables (asynchronous load of column tables that were loaded before the restart)
For more details have a look at the SAP HANA Administration guide (search for “Restart Sequence”) or the SAP HANA Administration book => Thanks to Lars and Richard for this great summary!
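For a rough feeling of step 3, the load rate quoted above (about 5 min for 100 GB of row store on this slow storage) can be turned into a back-of-the-envelope estimate. This is my own sketch; the rate is entirely storage dependent:

```python
def rowstore_load_estimate_sec(rowstore_gb, sec_per_100gb=300):
    """Estimate step 3 (row store load) of the restart sequence.
    300 s per 100 GB is the rough rate quoted above; real values
    depend heavily on the I/O subsystem."""
    return rowstore_gb * sec_per_100gb / 100.0

print(rowstore_load_estimate_sec(11))   # 11 GB row store -> 33.0 seconds
print(rowstore_load_estimate_sec(100))  # -> 300.0 seconds
```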
Test DB: 40GB NW 7.40 system with non-enterprise storage (= slow):
read: 33 MB/s, avg read size: 31 KB, avg read time: 0.93 ms
write: 83 MB/s, avg write size: 243 KB, avg write time: 2.85 ms
row store size: 11 GB
CPU: 8 vCPU (VMware; CPU E5-2680 v2 @ 2.80GHz)
Start time without preload: AVG 1:48
Stop time without preload: AVG 2:15
Start time with a 5GB column table (REPOSRC) preloaded:
SQL for preload (more information in the guide “SAP HANA SQL and System views Reference”):
alter table REPOSRC preload all
verify with HANA_Tables_ColumnStore_PreloadActive script from note 1969700 – SQL statement collection for SAP HANA
Start time with preload: AVG 1:49
Stop time with preload: AVG 2:18
Why doesn't the start time increase although 5GB more data has to be loaded?
Since SPS 7, preloading, together with reloading of tables, happens asynchronously directly after the HDB restart has finished. That way, the system is available again for SQL access that does not require the columns that are still being loaded.
With enterprise hardware the start times are faster!
If you want to know how long it takes to load all data into memory you can execute a python script.
load all tables into memory with python script:
cdpy (/usr/sap/HDB/SYS/exe/hdb/python_support/)
python ./loadAllTables.py --user=System --password=<password> --address=<hostname> --port=3xx15 --namespace=<schema_name>
[140737353893632, 854.406] << ending loadAllTables, rc = 0 (RC_TEST_OK) (91 of 91 subtests passed), after 854.399 secs
In a similar enterprise system it takes about 140-200 seconds.
2) Cross SID backup
S: “It is not possible to refresh a system via Cross-SID copy”
A: Cross-SID copy (single container) from disk has been available for a long time. Since SPS 09 it is also available via the Backint interface.
Multitenant Database Containers (MDC) can currently (SPS 11) only be restored via disk for a Cross-SID copy.
3) Col / row store conversion
S: “Column tables can’t be converted to row store and vice versa. It is defined by sap which tables are stored in which type.”
A: It is correct that during the migration the SWPM procedure (used for system copy) creates files that define in which store the tables are created.
But technically you can change the type from row to column and vice versa on the fly. There must be a reason for it, though, e.g. on advice of SAP Support. Keep in mind that SAP, like any other software vendor, decides how its application tables are stored. If you have no dependencies on the application, e.g. for custom tables or a standalone HANA installation for your own applications, you can choose freely.
In the past SAP delivered a rowstorelist.txt with Note 1659383 (Row Store List for SAP NetWeaver 7.30/7.31 on SAP HANA Database). This approach is outdated. Nowadays you use the latest version of SMIGR_CREATE_DDL with the option “RowStore List” (Note 1815547 – Row/Column Store Check without rowstorelist.txt).
4) Sizing * 2
S: “You have to double the result of the sizing report.”
A: The results of the sizing reports are final; you don't have to double them.
| SIZING DETAILS (for 512 GB node) | data [GB] | total [GB] incl. dyn. |
| --- | --- | --- |
| MASTER: | | |
| Row Store | 53 | 106 |
| Master Column Store | 11 | 21 |
| Caches / Services | 50 | 50 |
| TOTAL (MASTER) | 114 | 178 |
| SLAVES: | | |
| Slave Column Store | 67 | 135 |
| Caches / Services | 0 | 0 |
| TOTAL (SLAVES) | 67 | 135 |
| TOTAL (All Servers) | 181 | 312 |
This is a scale-up solution, so master and slave are functionally on one host. In a scale-out solution you have one host as the master for the transactional load; this one holds all row store tables. SAP recommends a minimum of 3 hosts in a BW scale-out solution. The other 2 slave hosts handle the reporting load.
Static and dynamic RAM
SAP HANA main memory sizing is divided into a static and a dynamic RAM requirement. The static part relates to the amount of main memory used for holding the table data. The dynamic part has exactly the same size as the static one and is used for temporary data => grouping, sorting, query temp objects, etc.
In this example you have:
row store: 53 * 2 = 106 GB
column store: 11 * 2 ≈ 21 GB (master, report-rounded) + 67 * 2 ≈ 135 GB (slaves, report-rounded) => 156 GB
Caches / Services: 50 GB is needed for every host
106 + 156 + 50 = 312 GB in total
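The arithmetic above can be reproduced in a few lines (all values are taken from the sizing report output shown earlier; note that the report's own rounded column store totals are used instead of a plain *2):

```python
# Static sizes from the sizing report above (GB)
row_store_static  = 53
caches_services   = 50   # sized once in this scale-up example

# Dynamic part has the same size as the static part => factor 2.
row_store_total = row_store_static * 2   # 106
col_store_total = 21 + 135               # report-rounded 11*2 (master) and 67*2 (slaves)

total = row_store_total + col_store_total + caches_services
print(total)  # -> 312, matching TOTAL (All Servers)
```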
5) Statistics
S: “Statistics are not needed anymore, so no collect runs are needed”
A: For the column store the statement is correct, because the data distribution is known through the dictionary. For the row store the optimizer gathers the needed estimates (e.g. distinct value and record counts) on the fly, so you don't have to schedule collection runs. Currently it is not documented how you can trigger the collection or change the sample size.
6) Data Fragmentation
S: “You don’t have to take care of data fragmentation. Everything is saved in memory via the column store and there is no fragmentation of data”
A: Some tables are created in the row store. The row store still follows the old rules and conditions, which results in fragmentation of data. How do you analyze it?
Please see note 1813245 – SAP HANA DB: Row store reorganization
SELECT
  HOST,
  PORT,
  CASE
    WHEN ( SUM(FREE_SIZE) / SUM(ALLOCATED_SIZE) > 0.30
           AND SUM(ALLOCATED_SIZE) > TO_DECIMAL(10) * 1024 * 1024 * 1024 )
    THEN 'TRUE' ELSE 'FALSE'
  END "Row store Reorganization Recommended",
  TO_DECIMAL( SUM(FREE_SIZE) * 100 / SUM(ALLOCATED_SIZE), 10, 2 ) "Free Space Ratio in %",
  TO_DECIMAL( SUM(ALLOCATED_SIZE) / 1048576, 10, 2 ) "Allocated Size in MB",
  TO_DECIMAL( SUM(FREE_SIZE) / 1048576, 10, 2 ) "Free Size in MB"
FROM M_RS_MEMORY
WHERE CATEGORY = 'TABLE' OR CATEGORY = 'CATALOG'
GROUP BY HOST, PORT
Reorg advice: if the row store is bigger than 10GB with more than 30% free space.
!!!Please check all prerequisites in the notes before you start the reorg!!! (online / offline reorg)
Row Store offline Reorganization is triggered at restart time and thus service downtime is required. Since it’s guaranteed that there are no update transactions during the restart time, it achieves the maximum compaction ratio.
Row Store Size: 11GB
in %: 27% (no reorg needed)
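The decision rule from the SQL above (more than 10 GB allocated AND more than 30 % free) can be expressed as a small helper; the first call mirrors my test system:

```python
def rowstore_reorg_recommended(allocated_gb, free_ratio_pct):
    """Mirror of the SQL check above: recommend a row store
    reorganization only if more than 10 GB are allocated
    AND more than 30 % of it is free space."""
    return allocated_gb > 10 and free_ratio_pct > 30

print(rowstore_reorg_recommended(11, 27))  # test system: False (no reorg needed)
print(rowstore_reorg_recommended(50, 42))  # hypothetical fragmented system: True
```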
But for testing I configured the needed parameters in indexserver.ini (don't forget to remove them afterwards!):
4 min startup time => during the start, the row store is reorganized in offline mode
Row Store Size: 7.5GB
in %: 3.5%
Additionally, if your revision is 90+, you should consider tables with multiple containers. Multiple containers are typically introduced when additional columns are added to an existing table. As a consequence of multiple containers, performance can suffer, e.g. because indexes only take effect for a subset of the containers.
The compression methods of the col store (incl. indexes) should also be considered.
As of SPS 09 you can switch the largest unique indexes to INVERTED HASH indexes. On average you can save more than 30% of space. See SAP Note 2109355 (How-To: Configuring SAP HANA Inverted Hash Indexes) for more information. Compression optimization for those tables:
UPDATE "<table_name>" WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'FORCE')
Details: 2112604 – FAQ: SAP HANA Compression
7) Persistency layer
S: “The persistency layer consists of exactly the same data which are loaded into memory”
A: As described in section 4), memory is divided into two areas. The temporary (dynamic) data is not stored on disk. The persistency layer on disk consists of the payload data, the before & after images / shadow page concept + snapshot data + delta log (for the delta merge). The real delta structure of the merge scenario only exists in memory, but it is written to the delta logs.
Check out this delta yourself: compare the memory usage with the disk size.
8) High Memory consumption HANA vs. Linux
S: “The used memory of the processes is the memory which is currently in use by HANA”
A: No, for the Linux OS it is not transparent what HANA is currently really using. The numbers in “top” never match the ones in HANA Studio. HANA does not instantly communicate freed pages to the OS; there is a time offset for freed memory.
There is a pretty nice document which explains this behaviour in detail:
By default, garbage collection kicks in pretty late. If your system shows high memory consumption, the root cause is not necessarily bad sizing or high load; the reason could also be late GC.
2169283 – FAQ: SAP HANA Garbage Collection
One kind of garbage collection we already discussed in 6) row and col fragmentation. Another one is for Hybrid LOBs and there is one for the whole memory. Check out your current heap memory usage with HANA_Memory_Overview.
In my little test system the value is 80GB. In this example we have 14GB for Pool/Statistics, 13GB for Pool/PersistenceManager/PersistentSpace(0)/DefaultLPA/Page, and 9GB for Pool/RowEngine/TableRuntimeData.
Check also the value of the column EXCLUSIVE_ALLOCATED_SIZE in the monitoring view M_HEAP_MEMORY. It contains the sum of all allocations in this heap allocator since the last startup.
SELECT
  CATEGORY,
  EXCLUSIVE_ALLOCATED_SIZE,
  EXCLUSIVE_DEALLOCATED_SIZE,
  EXCLUSIVE_ALLOCATED_COUNT,
  EXCLUSIVE_DEALLOCATED_COUNT
FROM M_HEAP_MEMORY
WHERE CATEGORY = 'Pool/Statistics'
   OR CATEGORY = 'Pool/PersistenceManager/PersistentSpace(0)/DefaultLPA/Page'
   OR CATEGORY = 'Pool/RowEngine/TableRuntimeData';
Just look at the indexserver port 3xx03 (the xsengine may also be listed if it is active).
Because of the many deallocations there is a gap between EXCLUSIVE_ALLOCATED_SIZE and the currently allocated size. The difference is usually free for reuse and can be freed with a GC run.
By default, memory GC is triggered in the following cases:
| Parameter + default value | Details |
| --- | --- |
| async_free_target = 95 (%) | When proactive memory garbage collection is triggered, SAP HANA tries to reduce allocated memory below async_free_target percent of the global allocation limit. |
| async_free_threshold = 100 (%) | With the default of 100 % the garbage collection is quite “lazy” and only kicks in when there is a memory shortage. This is in general no problem and provides performance advantages, as the number of memory allocations and deallocations is minimized. |
| gc_unused_memory_threshold_abs = 0 (MB) | Memory garbage collection is triggered when the amount of allocated, but unused memory exceeds the configured value (in MB). |
| gc_unused_memory_threshold_rel = -1 (%) | Memory garbage collection is triggered when the amount of allocated memory exceeds the used memory by the configured percentage. |
The % values are related to the configured global allocation limit.
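The trigger logic in the table can be sketched as a simplified model (my own sketch of the parameter semantics described above, not the actual HANA implementation; all memory values are made-up examples):

```python
def gc_triggered(allocated_mb, used_mb, gal_mb,
                 async_free_threshold=100,  # % of global allocation limit
                 gc_unused_abs=0,           # MB, 0 = disabled
                 gc_unused_rel=-1):         # %, -1 = disabled
    """Sketch: memory GC fires when allocated memory reaches
    async_free_threshold % of the global allocation limit (GAL),
    or when allocated-but-unused memory exceeds the configured
    absolute / relative thresholds."""
    unused_mb = allocated_mb - used_mb
    if allocated_mb >= gal_mb * async_free_threshold / 100.0:
        return True
    if gc_unused_abs > 0 and unused_mb > gc_unused_abs:
        return True
    if gc_unused_rel >= 0 and unused_mb > used_mb * gc_unused_rel / 100.0:
        return True
    return False

# Defaults are lazy: 90 GB allocated of a 120 GB limit does not trigger GC ...
print(gc_triggered(allocated_mb=92160, used_mb=78336, gal_mb=122880))  # False
# ... but with gc_unused_memory_threshold_abs = 10240 MB the ~13.5 GB unused do
print(gc_triggered(92160, 78336, 122880, gc_unused_abs=10240))         # True
```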
Unnecessarily triggered GC should absolutely be avoided; how you configure these values depends on your system load and sizing.
The unused memory will normally be reused by the HDB (free pool), so there is no need to trigger the GC manually. But in some cases it is possible that a pool uses more memory. This should be analyzed (1999997 – FAQ: SAP HANA Memory, 14. “How can I identify how a particular heap allocator is populated?”).
If we now trigger a manual GC for the memory area:
hdbcons 'mm gc -f'
free -m before the GC run:
total used free shared buffers cached
Mem: 129073 126877 2195 15434 142 32393
-/+ buffers/cache: 94341 34731
hdbcons output:
Garbage collection. Starting with 96247664640 allocated bytes.
82188451840 bytes allocated after garbage collection.
free -m after the GC run:
total used free shared buffers cached
Mem: 129073 113680 15393 15434 142 32393
-/+ buffers/cache: 81144 47929
So in this scenario there is not much difference inside the HDB at this point, but on the OS side the no longer allocated memory is freed.
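Cross-checking the numbers above: the hdbcons output and the OS-level free values tell the same story, roughly 13 GB given back:

```python
GIB = 1024 ** 3

# hdbcons "mm gc -f" output (bytes)
allocated_before = 96247664640
allocated_after  = 82188451840
freed_gib = (allocated_before - allocated_after) / GIB
print(round(freed_gib, 1))  # -> 13.1 GiB freed inside HANA

# "free -m" output (MB): OS free memory grew by about the same amount
os_free_before_mb = 2195
os_free_after_mb  = 15393
os_freed_gib = (os_free_after_mb - os_free_before_mb) / 1024
print(round(os_freed_gib, 1))  # -> 12.9 GiB back at the OS level
```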
You don’t have to do this manually! HANA is fully aware of the memory management!
If you get an alert (ID 1 / 43) because of the memory usage of your services, you should not analyze only the row and column store. Also take care of the GC of the heap memory; in the past there were some bugs in this area.
ID 1: Host physical memory usage: low: 95% medium: 98% high:100%
ID43: memory usage of services: low: 80% medium: 90% high:95%
As you can see, by default the GC is triggered lazily at a 100% fill ratio of the global allocation limit. That may be too late for your system to react before the GC takes place.
In addition to the memory usage, check the mini check script and the advice in the notes. If you are not sure how to analyze or solve the issue, you can order a TPO service from SAP (2177604 – FAQ: SAP HANA Technical Performance Optimization Service).
9) Restore and log backups
S: “A restore requires logs for a consistent restore”
A: Wrong, a HANA backup is based on snapshot technology, so the backup is consistent without any additional log file. This means it is a full online copy of one particular consistent state, which is defined by the log position at the time the backup is executed.
Of course, if you want to roll forward, you have to apply log files for a point-in-time recovery or a recovery to the most recent state.
10) Backup Catalog
S: “Catalog information are stored in a file like oracle *.anf which is needed for recovery”
A: The backup catalog is saved with every data AND log backup. It is not saved as a human-readable file! You can check the catalog in HANA Studio or, if you use backup-to-disk, with the command “strings log_backup_0_0_0_0.<backupid>” in the backup location of your system.
The backup catalog includes all information needed to determine which file belongs to which backup set. If you delete your backups at the disk/VTL/tape level, the backup catalog still holds the invalid information.
Housekeeping of the backup catalog
There is currently no automatism to clean it up. Check the size of your backup catalog: if it is bigger than about 20MB, you should take care of housekeeping for the backup catalog (depending on your backup retention and the size of the system), because as already mentioned it is saved with EVERY log AND data backup. This can mean more than 200 times a day! How big is the current backup catalog of your productive HANA system? Open the backup editor in HANA Studio, click “Show Log Backups”, search for the backup catalog, and select it to check the size.
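The “more than 200 times a day” follows directly from typical settings. This is a sketch under assumed example values (a log backup roughly every 15 minutes for each of three log-writing services such as nameserver, indexserver, and xsengine; your intervals and service count will differ):

```python
def catalog_writes_per_day(log_backup_interval_min=15, services_with_logs=3,
                           data_backups_per_day=1):
    """The backup catalog is written with EVERY log and data backup.
    Assumed example: a log backup every 15 minutes for each of three
    log-writing services, plus one daily data backup."""
    log_backups = (24 * 60 // log_backup_interval_min) * services_with_logs
    return log_backups + data_backups_per_day

print(catalog_writes_per_day())  # -> 289 catalog writes per day
```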
At the end you also have to take care of your data housekeeping and resource management. You can save a lot of resources if you consider all the hints in the notes.
I hope I could clarify some statements for you.
# Edit history
2100010 – SAP HANA: Popular Misconceptions (thanks to Lars for the hint)
V4: updated statistics (5); row/col statement adjusted, format adjusted
V5: adjusted format and added details for backup catalog
Comments

Hmm... I'm torn on this one.
What I like is that most of the information provided is backed by a reference and that these references all point to working URLs (and do not end up nowhere or in the SAP corporate network). That's a big plus in my eyes and often overlooked here in SCN.
What's surprising however is that the obvious SAP note for this sort of claims and facts (http://service.sap.com/sap/support/notes/2100010 "SAP HANA: Popular Misconceptions") isn't even mentioned.
And point 5) Statistics is not quite right.
There is no background hidden job that runs and collects statistics for row store tables.
What happens with both row and column store tables is that the optimizer needs to make estimations about distinct value counts and total record counts. Since the data is in memory and the data structures are built for that, SAP HANA can gather this information fairly quickly on the fly when needed for query optimisation and execution.
But there is indeed a use case where it can be important to explicitly collect and store optimizer statistics: virtual tables, that is, tables that are actually remote tables in a different database.
Advocating the inverted hash key is IMHO usually a mistake and a band aid at best for the vast majority of all use cases. It's a very, very, very OLTP feature and not designed to support queries.
For the "row store tables cannot be converted" statement, I don't think the right answer is to state "sure you can do this! Here use this report...". Instead I'd point out that while it is technically possible to convert tables from row to column store and back, SAP (just as any other software vendor) decides which of the application tables should be stored how.
So, the user is not free to pick the technical storage for SAP application tables.
My last remark here would be about the formatting. The whole thing doesn't look all that nice and a bit "through the wind"...
Hope to not have discouraged you now - I really like to see more of this know-how sharing!
Thanks for your comment. The formatting was destroyed by the type migration from blog to DOC. I have adjusted it and added your hints for statistics and the row/column store conversion.
The point inverted hash indexes is covered in every TPO (for BW) report of SAP. That was the reason to mention it here.
Regarding the inverted hash index recommendation: Large BW indexes are often primary keys or unique indexes with the main purpose to guarantee uniqueness. Furthermore the implicit single column indexes on the indexed columns remain unchanged when you perform a conversion to inverted hash. So in many cases the benefit of space reduction is higher than the risk of query performance regressions. But of course you should test these changes carefully and decide individually.
Thanks for your opinion. A lot of customers see a high risk in this conversion and skip the task on their housekeeping list, also because the test effort is too high.
As of October 2017: inverted hash indexes are not supported in BW, and there are several other restrictions:
2259872 – Changes to generated BW tables with database tools