
HANA – table unload priorities

Many people still think that all data which exists in the HANA data area is also loaded into memory all the time. Of course, there are some techniques that influence the unload/placement of tables, e.g. data aging and the non-active data concept of BW. Usually, unloads happen based on a "least recently used" (LRU) approach, so the columns that have not been used for the longest time are unloaded first. HANA decides when it makes sense to unload tables in order to free memory.

But what about your own tables? Can you influence when data is unloaded? For general information you can refer to SAP Note 2127458 – FAQ: SAP HANA Loads and Unloads. You can manually unload or load tables, but usually you want this to happen automatically and you want to understand when tables are loaded or unloaded.
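As a minimal sketch, the manual variant looks like this (the table name "MYTABLE" is just a placeholder):

LOAD "MYTABLE" ALL;  -- explicitly load all columns of the table into memory
UNLOAD "MYTABLE";    -- explicitly remove the table from memory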

Maybe you already know the load/unload traces on the file system, but they don't tell you why an unload happened.
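A starting point – assuming a recent HANA revision and sufficient monitoring privileges – is the monitoring view M_CS_UNLOADS, which records recent column store unloads together with a reason code (e.g. LOW MEMORY, EXPLICIT, UNUSED RESOURCE):

SELECT unload_time, schema_name, table_name, reason
FROM m_cs_unloads
ORDER BY unload_time DESC;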

First, you have to know that there are unload priorities. A table's UNLOAD PRIORITY ranges from 0 to 9: priority 0 means the table is not unloaded automatically, and higher values mean the table is unloaded earlier. The default for column store tables is 5.

Then you need to know about the different disposition weights a resource can have. The dispositions (e.g. Temporary, Shortterm, Longterm, NonSwappable) are visible in the hdbcons output below.

You can select the current unload priority values from sys.tables:

select table_name, unload_priority from sys.tables;
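If you also want to see whether a table is currently loaded, you can combine this with the monitoring view M_CS_TABLES. A sketch (note that M_CS_TABLES contains one row per table partition, so partitioned tables appear more than once):

SELECT t.table_name, t.unload_priority, m.loaded
FROM sys.tables t
JOIN m_cs_tables m
  ON m.schema_name = t.schema_name
 AND m.table_name = t.table_name;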


You can see these dispositions if you use the tool hdbcons with the option 'pageaccess a':

DefaultPageType SizeCls Disposition hasRefs Count MemorySize
ConvIdxPage 256k Temporary yes 1 262144
ConvLeafPage 256k Temporary yes 202 52953088
FileIDMappingPage 256k Temporary yes 16 4194304
FileIDMappingPage 256k Shortterm yes 252 66060288
ContainerDirectoryPage 256k Longterm yes 1019 267124736
ContainerDirectoryPage 256k Longterm no 22 5767168
ContainerNameDirectoryPage 256k Longterm no 59 15466496
UndoFilePage 64k Shortterm yes 425 27852800
VirtualFilePage 4k InternalShortterm no 53508 219168768
VirtualFilePage 16k InternalShortterm no 58164 952958976
VirtualFilePage 64k InternalShortterm no 40742 2670067712
VirtualFilePage 256k InternalShortterm no 23168 6073352192

 

RowStorePageAccess PageType SizeCls Disposition hasRefs Count MemorySize
ConvIdxPage 256k Temporary yes 1 262144
ConvLeafPage 256k Temporary yes 295 77332480
RowStorePage 16k-RowStore NonSwappable no 671744 11005853696

As you may know, the row store uses the shared memory of the Linux system and cannot be unloaded from memory by HANA. Therefore, pages of type RowStorePage always have the disposition 'NonSwappable'.


By default, a column store (CS) table has the unload priority 5; in this example, its last access was 10 hours ago.

The page cache has the disposition Shortterm, and its last access was one hour ago.

The column table ends up with the lower resulting value (270 vs. 300), so it is unloaded earlier than the pages of the page cache.


If there are tables that should in general be unloaded earlier or later than others, you can prioritize unloads using the UNLOAD PRIORITY setting:

ALTER TABLE "<table_name>" UNLOAD PRIORITY <priority>;
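For example, a sketch that marks a rarely used archive table (the name "ZARCHIVE_DATA" is just a placeholder) as a preferred unload candidate and then verifies the change:

ALTER TABLE "ZARCHIVE_DATA" UNLOAD PRIORITY 9;  -- 9 = unload as early as possible

SELECT table_name, unload_priority
FROM sys.tables
WHERE table_name = 'ZARCHIVE_DATA';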

SAP standard tables shouldn't be changed without a good reason.


Now you know how and when your tables are unloaded. If you need more details, refer to the following SAP Notes:

2127458 – FAQ: SAP HANA Loads and Unloads

2222110 – FAQ: SAP HANA Load History

1999997 – FAQ: SAP HANA Memory

 

Regards,
Jens

Comments
      Nandan Tadahal

      Thanks Jens for the detailed explanation. The unload priority can also be checked in transaction SE14, under the storage parameters option for tables.

      Matthew Wong

      Hi Jens Gleichmann,

      Thank you for this intro to table priorities. I am wondering how I can fully unload some tables that show as "FULL" loaded on my HANA system. The tables are just backups and I don't want to keep them in memory at all. All the tables have unload priority = 5, but when I do big queries, I always get memory allocation failures.

      Thank you,

      Matt

      Matthew Wong

      I found two options:

      UNLOAD "<table_name>" 
      
      and
      
      ALTER TABLE "<table_name>" WITH PARAMETERS ( 'UNUSED_RETENTION_PERIOD' = '<unused_retention_period_s>' );
      
      
      I guess my question is: say I have a HANA system with 64 GB of memory and my column tables occupy 32 GB of it. If I run a join that requires 48 GB of memory, will 16 GB of tables be unloaded automatically so that the join can complete?
      
      Do memory allocation errors or out-of-memory errors occur when joins are too complex? What causes these errors, and is the only solution to buy more memory?
      Jens Gleichmann (Blog Post Author)

      Hi Matt,

      First of all, which HANA revision are you currently using? If these tables are backup tables that should not be loaded, why don't you set their unload priority to 0? Have you checked which heap areas are too big? Normally the working area should be big enough to meet your requirements, but what about the OOM dump that was created? So it depends on how your memory is used.

      Regards,

      Jens

       

      Jalina Vincent

      Hi Jens,

      Thank you for the wonderful document. I am able to see all the PSA and system tables in sys.tables, but I want to see the details of other tables (the active data tables of DSOs). Can you please share the table names?

      Thanks.

       

      Jens Gleichmann (Blog Post Author)

      Hi Jalina,

      How they are named depends on your namespace. Please use table RSDODSO to find the table names to query sys.tables. It should be something like /*/A*. Please check the official documentation.

       

      Regards,

      Jens

      Indu Khurana

      Hello Jens,

      Thanks for the document.

      Can you please clarify the point below?

      I have created a system-versioned (SV) table with a history table, with unload priority 5.

      The data in the SV table is written in upsert mode. So whenever I write to the system-versioned main table, the history table also gets updated with the same number of records on every execution, regardless of whether the data in the main SV table has changed or not.

       

      Ideally, records should only be moved to the history table when there is a change in the main table, not on every execution, right?

      Can you please suggest?

       

      Thanks,

      Indu Khurana.