This is the second part of the blog series on HANA memory usage details and changes in SPS12.
For now only the “used memory” part is interesting. Maybe you have heard of “resident memory” => this is the OS view, which is not an up-to-date value because of the deferred garbage collection / release of memory, and it cannot be taken as an indicator.
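To see the difference between the two values on your own system, you can query the host resource view directly. This is a hedged sketch: the column names follow M_HOST_RESOURCE_UTILIZATION as documented around SPS12, so verify them on your revision. USED_PHYSICAL_MEMORY is the host-level OS view, not only the HANA processes.

```sql
-- Compare HANA "used memory" with the OS-level physical memory usage.
-- Column names assumed from M_HOST_RESOURCE_UTILIZATION; check on your revision.
SELECT HOST,
       ROUND(INSTANCE_TOTAL_MEMORY_USED_SIZE / 1024 / 1024 / 1024.0, 1) AS HANA_USED_GB,
       ROUND(USED_PHYSICAL_MEMORY            / 1024 / 1024 / 1024.0, 1) AS OS_USED_PHYS_GB
  FROM M_HOST_RESOURCE_UTILIZATION;
```

Because of the deferred release of memory described above, the OS-level value can stay high long after HANA has logically freed the memory.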
“used memory” consists of:
- Table Data (Row Store + Column Store + System Tables)
- Code and Stack
- Database Management / Working Space (heap and shared memory)
- Column store
- Row store indexes
- Intermediate results
- Temporary structures
- SAP HANA page cache
You can check the usage of each area with SQL (see SAP Note 1969700 for details): “HANA_Memory_TopConsumers” (statement adjustment: search for the term “modification section” and set AGGREGATE_BY = ‘AREA’).
So the second big area behind the main data inside the CS is heap memory, with about 200 GB (~42% of the used memory). That is a pretty big part, isn’t it? But what is behind it? Why does HANA need the heap? OK, let’s go deeper:
SELECT SERVICE_NAME, EFFECTIVE_ALLOCATION_LIMIT, TOTAL_MEMORY_USED_SIZE,
       CODE_SIZE, STACK_SIZE, HEAP_MEMORY_ALLOCATED_SIZE, HEAP_MEMORY_USED_SIZE
  FROM M_SERVICE_MEMORY
 WHERE SERVICE_NAME = 'indexserver';
This statement shows the values of the column store and the heap area together as heap memory. If you talk about heap, you really mean the CS + page cache + intermediate results + MVCC + delta merge areas. With the normal memory overview statement you are not able to determine which areas belong together unless you have read this blog or studied a lot of notes and guides 😉
The area to analyse is the 192 GB of heap. To go deeper we need another statement.
You can check the usage of each area with the SQL script “HANA_Memory_TopConsumers_1.00.85” (statement adjustment: search for the term “modification section” and restrict the AREA to ‘HEAP’).
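If you don’t have the mini-check scripts at hand, a raw alternative is to query M_HEAP_MEMORY directly. This is a sketch under the assumption that the standard column names (CATEGORY, EXCLUSIVE_SIZE_IN_USE) are available on your revision:

```sql
-- List the largest heap allocators per service (assumed M_HEAP_MEMORY layout).
SELECT TOP 10
       HOST, PORT, CATEGORY,
       ROUND(EXCLUSIVE_SIZE_IN_USE / 1024 / 1024 / 1024.0, 1) AS USED_GB
  FROM M_HEAP_MEMORY
 ORDER BY EXCLUSIVE_SIZE_IN_USE DESC;
```

The CATEGORY column contains the allocator names (e.g. Pool/…) that we look at next.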
Most parts of the heap have a good reason to be allocated, but there is also an area called “Pool/PersistenceManager/PersistentSpace/DefaultLPA/Page”.
This one is the SAP HANA page cache. In this example it is the biggest part, with about 150 GB of the 192 GB.
It works similarly to the file system cache of a Unix system: data retrieved from disk is stored in this area. When this area is shrunk is a decision of the system itself. It can take a long time, and until this happens the allocated memory is counted as “used memory”.
But this memory will be reused in case the memory is needed for other objects, such as table data or query intermediate results.
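To watch the size of this specific allocator over time, you can filter M_HEAP_MEMORY on the allocator name mentioned above. Again a hedged sketch, assuming the standard M_HEAP_MEMORY columns:

```sql
-- Current size of the SAP HANA page cache allocator.
SELECT HOST, PORT,
       ROUND(EXCLUSIVE_SIZE_IN_USE / 1024 / 1024 / 1024.0, 1) AS PAGE_CACHE_GB
  FROM M_HEAP_MEMORY
 WHERE CATEGORY = 'Pool/PersistenceManager/PersistentSpace/DefaultLPA/Page';
```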
This speeds up the work in some cases, but due to a bug the size of this cache can be unnecessarily large on Revisions 110 to 122.04.
There are workarounds:
2301382 – High “Used memory” in Pool/PersistenceManager/PersistentSpace/DefaultLPA/Page after upgrade to HANA SPS11 or a higher SPS
Rev. 110 – 122.01: schedule a regular “resman shrink”
=> I don’t recommend this one, because some minutes later (depending on the workload) the allocation is high again
SAP HANA 122.02 – 122.05: set the unload_upper_bound parameter using the generated <UNLOAD_UPPER_BOUND_COMMAND>
=> after the limit is hit, a shrink is triggered automatically
! Both are only workarounds !
=> it can have an influence on your performance, because shrinking the resource container can potentially evict objects other than virtual file pages from memory, for example table data or query intermediate results
=> It can lead to column store table unloads
A working solution was released in Rev. 122.06. However, some customers have reported high usage despite running a higher revision. On newer Revisions, an increased size of the page cache is typically caused by other effects:
- SAP Note 2427897: Increased page cache due to takeover / recovery
- SAP Note 2403124: Optimization of SAP HANA page cache usage
The following setting turns off the optimization of keeping column table main persistent pages in memory:
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'internal_caching_for_main') = 'false' WITH RECONFIGURE ;
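After setting the parameter, you can verify that it is active. This is a sketch using M_INIFILE_CONTENTS, the standard view for effective ini settings; verify the column names on your revision:

```sql
-- Check the effective value of the internal_caching_for_main parameter.
SELECT FILE_NAME, LAYER_NAME, SECTION, KEY, VALUE
  FROM M_INIFILE_CONTENTS
 WHERE SECTION = 'persistence'
   AND KEY = 'internal_caching_for_main';
```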
To check out the different areas you can use hdbcons:
> pageaccess a
### DynamicPageAccess ###
PageType                   SizeCls Disposition       hasRefs Count  MemorySize  ModifiedCount ModifiedDiskSize
ConvIdxPage                256k    Temporary         yes     1      524984      1             262144
ConvLeafPage               256k    Temporary         yes     220    65484320    83            21757952
FileIDMappingPage          256k    Temporary         yes     175    45997000    0             0
FileIDMappingPage          256k    Shortterm         yes     141    37060440    1             262144
ContainerDirectoryPage     256k    Longterm          yes     1204   316459360   15            3932160
ContainerNameDirectoryPage 256k    Longterm          no      7      1839880     0             0
UndoFilePage               64k     Shortterm         yes     489    32387448    92            6029312
VirtualFilePage            4k      InternalShortterm no      28685  137458520   84            344064
VirtualFilePage            16k     InternalShortterm no      25911  442559880   81            1327104
VirtualFilePage            64k     InternalShortterm no      29514  1954771248  81            5308416
VirtualFilePage            256k    InternalShortterm no      20549  5401099160  52            13631488
VirtualFilePage            1M      InternalShortterm no      13979  14667773288 12            12582912
VirtualFilePage            4M      InternalShortterm no      6546   27460470000 0             0
VirtualFilePage            16M     InternalShortterm no      5762   96674328944 0             0
VirtualFileLOBPage         4k      Shortterm         no      106524 510463008   0             0
VirtualFileLOBPage         16k     Shortterm         no      12473  213038840   0             0
VirtualFileLOBPage         64k     Shortterm         no      1462   96831184    0             0
VirtualFileLOBPage         256k    Shortterm         no      179    47048360    0             0
VirtualFileLOBPage         1M      Shortterm         no      9      9443448     0             0
VirtualFileLOBPage         4M      Shortterm         no      1      4195000     0             0
TableContainerPage         4k      Longterm          no      509    2439128     0             0
TableContainerPage         4k      Longterm          yes     126540 606379680   2             8192
TableContainerPage         4k      NonSwappable      no      119    570248      0             0
TableContainerPage         4k      NonSwappable      yes     12819  61428648    100           409600
UnifiedTableDelta          4k      Shortterm         no      295    1413640     0             0
UnifiedTableDelta          4k      Shortterm         yes     615    2947080     14            57344
UnifiedTableDelta          16k     Shortterm         no      123    2100840     0             0
UnifiedTableDelta          16k     Shortterm         yes     65     1110200     6             98304
UnifiedTableDelta          64k     Shortterm         no      180    11921760    0             0
UnifiedTableDelta          64k     Shortterm         yes     43     2847976     3             196608
UnifiedTableDelta          256k    Shortterm         no      584    153498560   0             0
UnifiedTableDelta          256k    Shortterm         yes     90     23655600    0             0
UnifiedTableDictionary     4k      Shortterm         no      67     321064      0             0
UnifiedTableDictionary     4k      Shortterm         yes     7519   36031048    8             32768
UnifiedTableDictionary     16k     Shortterm         yes     174    2971920     1             16384
UnifiedTableDictionary     64k     Shortterm         yes     32     2119424     0             0
UnifiedTableDictionary     256k    Temporary         yes     212    55722080    12            3145728
UnifiedTableDictionary     256k    Shortterm         yes     382    100404880   8             2097152
UnifiedTableDictionary     1M      Shortterm         yes     1      1049272     0             0
UnifiedTableDictionary     4M      Shortterm         yes     1      4195000     0             0
UnifiedTableMVCC           256k    Shortterm         yes     30492  8014517280  24            6291456
total count = 434693
total referenced count = 181215
total modified count = 680
total memory size = 146.4GB
total referenced memory size = 8977.2MB
total modified disk size = 74.1MB
Here we can see that the biggest part is consumed by the different block sizes of VirtualFilePage. This feature was introduced with SPS11: some specific types of persistency pages (virtual file pages) are more likely to be added to the HANA page cache to improve performance.
You can see that the VirtualFilePages are kept with the disposition “InternalShortterm” and the VirtualFileLOBPages with “Shortterm”, which is a kind of priority.
The mechanism that evicts objects from memory when needed will consider objects with the disposition “InternalShortterm” with higher priority than other objects, such as table data and query intermediate results. This eviction happens automatically before any out-of-memory (OOM) situation.
So the new caching behaviour will not lead to OOM situations.
As you can see, this new feature, which can lead to an increased “used memory”, is not an issue in itself. But it is not transparent when the sizing is wrong and there really is an issue, because the alerting also reacts to the threshold of 80% (default) of a service. In the overview of the HANA Administration Console or DBACOCKPIT you will see that the memory consumption is pretty high, but not why.
It can also lead to a licensing issue, because licensing depends on the peak memory usage.
Can there be fragmentation in the heap memory? Yes, of course, but normally HANA itself takes care of it with its garbage collection. You can check it with the command:
hdbcons 'mm poolallocator'
A fragmentation of up to 15% is acceptable. If you want to trigger the GC manually, you can use the following command:
hdbcons 'mm gc -f'
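A rough SQL-level approximation of heap fragmentation is the difference between allocated and used heap per service. This reuses the M_SERVICE_MEMORY columns from the query shown earlier; it is only an estimate, not the detailed per-allocator view that hdbcons gives you:

```sql
-- Allocated-but-unused heap per service as a rough fragmentation indicator.
SELECT SERVICE_NAME,
       ROUND((HEAP_MEMORY_ALLOCATED_SIZE - HEAP_MEMORY_USED_SIZE)
             / 1024 / 1024 / 1024.0, 1)                           AS FREE_IN_HEAP_GB,
       ROUND(100.0 * (HEAP_MEMORY_ALLOCATED_SIZE - HEAP_MEMORY_USED_SIZE)
             / HEAP_MEMORY_ALLOCATED_SIZE, 1)                     AS FRAGMENTATION_PCT
  FROM M_SERVICE_MEMORY;
```

If FRAGMENTATION_PCT stays well above 15% for a longer period, the manual GC above may be worth a try.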
V1.1: added heap fragmentation
V1.2: added new version of SAP Note 2301382 and some hints