
Technical details about data aging

If you successfully finished my last blog post, ‘General information about data aging’, it is time for the deep dive: how SAP has implemented it and how it works in detail.

As you have already read, partitioning is an elementary part of the data aging process, separating the current data from the historical. For this, range partitioning is used with an additional column called ‘_DATAAGING’:
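As a minimal sketch (table name, columns, and date ranges are invented; the convention that the hot partition carries the special value '00000000' in _DATAAGING matches how the data aging framework marks current data), such a partitioning scheme could look like this:

```sql
-- Hypothetical aged table: rows with _DATAAGING = '00000000' are current,
-- rows with an aging date fall into one of the historical range partitions.
CREATE COLUMN TABLE "ZDEMO_AGED" (
  "MANDT"      NVARCHAR(3),
  "DOCNR"      NVARCHAR(10),
  "_DATAAGING" NVARCHAR(8) DEFAULT '00000000'
)
PARTITION BY RANGE ("_DATAAGING")
(
  PARTITION VALUE = '00000000',                -- current (hot) partition
  PARTITION '20160101' <= VALUES < '20170101', -- historical data of 2016
  PARTITION '20170101' <= VALUES < '20180101'  -- historical data of 2017
);
```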

A short characterization of the two parts:

Current data is the data relevant to the operations of application objects, needed in day-to-day business transactions. The application logic determines when current data turns historical by using its knowledge about the object’s life cycle. It validates the conditions at the object level from a business point of view, based on the status, execution of existence checks, and verification of cross-object dependencies.


Historical data is data that is not used for day-to-day business transactions. By default, historical data is not visible to ABAP applications and is no longer updated from a business point of view.


Limitation: there can only be one current partition, with a maximum of 2 billion rows, but there can be multiple partitions for the historical part.

If you activate data aging for an object/table, you can only select the historical data via a special syntax. The SAP HANA-specific database shared library (DBSL) in the ABAP server adds a corresponding clause to the SQL statements that are sent to SAP HANA. The ABAP classes CL_ABAP_SESSION_TEMPERATURE and CL_ABAP_STACK_TEMPERATURE enable access to the historical data.


By adding the clause WITH RANGE_RESTRICTION ('CURRENT') to an SQL statement, SAP HANA restricts the operation to the hot data partition only.
A date value instead restricts the operation to all partitions with data temperatures above the specified value. The clause WITH RANGE_RESTRICTION ('20120701'), for example, tells SAP HANA to search the hot partition and all cold partitions that contain values greater than or equal to '20120701'. Range restriction can be applied to SELECT, UPDATE, UPSERT, DELETE statements and to procedure calls.
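For illustration (table and column names are invented), a statement restricted to the hot partition, versus one that also scans cold partitions from July 2012 onwards, might look like this:

```sql
-- Touch only the current (hot) partition:
SELECT * FROM "ZDEMO_AGED"
  WHERE "DOCNR" = '4711'
  WITH RANGE_RESTRICTION ('CURRENT');

-- Touch the hot partition plus all cold partitions
-- holding values >= '20120701':
SELECT * FROM "ZDEMO_AGED"
  WHERE "DOCNR" = '4711'
  WITH RANGE_RESTRICTION ('20120701');
```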




The query will select the current partition 1 and parts of partition 2. HANA won’t load the complete partition 2 into memory! Cold partitions make use of paged attributes: while ordinary columns are loaded entirely into memory upon first access, paged attributes are loaded page-wise. Ideally, only the pages that hold the requested rows are loaded.


It is possible to configure the amount of memory used by page-loadable columns. The parameters are a little bit confusing. The defaults, in megabytes or percent, are:



The first ones are set with a default of 999 TB!

The last two (*_rel) set a relative lower and upper threshold for the total memory size of page-loadable column resources per service, in percent of the process allocation limit.

When the total size of page-loadable column resources per service falls below the minimum of the two lower threshold values (page_loadable_columns_min*), i.e. the effective lower threshold, the HANA system stops unloading page-loadable column resources from memory with first priority based on an LRU strategy and switches to a weighted LRU strategy for all resources.

When the total memory size of page-loadable column resources per service exceeds the minimum of the two upper threshold values (page_loadable_columns_limit*), the HANA system automatically starts unloading page-loadable column resources from memory with first priority, based on an LRU strategy.

You can set them via the HANA Studio interface or via SQL command (example value: 50 GB):

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('memoryobjects', 'page_loadable_columns_min_size') = '51200'
  WITH RECONFIGURE;



You can define a partition range for every table. For instance, you can define a partition per year, and if the partitions get too big, you can repartition (splitting only) them from yearly to monthly:
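A hedged sketch of such a split (table name and ranges are invented; SAP HANA allows repartitioning an existing range-partitioned table by re-issuing the partition specification with finer ranges):

```sql
-- Split the yearly 2017 partition into monthly ones (sketch only):
ALTER TABLE "ZDEMO_AGED"
PARTITION BY RANGE ("_DATAAGING")
(
  PARTITION VALUE = '00000000',                -- current data stays hot
  PARTITION '20170101' <= VALUES < '20170201', -- January 2017
  PARTITION '20170201' <= VALUES < '20170301', -- February 2017
  PARTITION '20170301' <= VALUES < '20180101'  -- rest of 2017
);
```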


But be careful: currently it is not possible to merge partitions with transaction DAGPTM (tested with release S/4HANA 1610 FP1). So start with a high-level range (year) and split it if needed.

Known Bugs

Note    | Description                                                                      | Fixed with
2509513 | Indexserver Crash at UnifiedTable::eposition During Table Load of Cold Paged     | >= 122.12 (SPS12)
        | Partition                                                                        | >= 012.01 (SPS01)
2497016 | Pages Belonging to Cold Partitions Created With Paged Attribute Are Not Unloaded | >= 122.10 (SPS12)
        | by the Resource Manager if They Are Pinned by an Inverted Index                  | >= 002.01 (SPS00)
        |                                                                                  | >= 012.00 (SPS01)
2440614 | SAP HANA: SQL error for MDX statement                                            | 745 Patch Level 415
        |                                                                                  | 749 Patch Level 210
        |                                                                                  | 750 Patch Level 27
        |                                                                                  | 751 Patch Level 17
        |                                                                                  | 752 Patch Level 7
2128075 | AppLog: Short dump ASSERTION_FAILED                                              |


  • Hi Jens, thanks for showing! Just sitting in hands-on-session to explore data aging.

    Do you know where to set the residence time for customer (Z-) tables?




    • Hi Enno,

      I also attended Richard’s hands-on session in Barcelona. It is planned to centralize the residence time.

      Currently this should work with TX DAGPTC => Edit partitioning objects => edit partitioning object with new threshold




      • Hi Jens, Enno,

        defining residence times for aging is at the moment per object, i.e. the aging objects each have their own residence time customizing, described in the documentation and the respective notes linked with the central note 2315141 (Collective note for Data Aging Framework). Most of the objects also provide a default residence time, e.g. 15 days for the Basis objects.

        If you want to create own aging objects for z-tables, you can create a corresponding customizing possibility on your own as part of the development and/or hard-code a default residence time within the aging logic as part of the new object.

        In case you are just enhancing existing aging objects with z-tables, the same residence time applies for the z-tables that are valid for the corresponding leading object. You can veto single object instances during an aging run, though, by implementing the corresponding enhancement BAdI that is offered by the aging objects that are marked as being extendable.

        What Jens mentioned with respect to threshold values in transaction DAGPTC is something different and not related to residence times at all: This is a setting or rather an internal fine-tuning possibility that we use in combination with SAP S/4HANA Cloud and has no relevance for on premise.

        Warm regards,




  • Hello Jens,
    Very informative & helpful blog on data aging. Thank you for sharing this. We are looking to implement HANA data aging in our landscape. Regarding this I have the two queries below, to which I could not find the answer anywhere. It would be a great help if you could answer these two queries.


    1. As data aging involves automatic table-level changes like the addition of the column “_DATAAGING” to the concerned table & other non-manual changes done by data aging in the database: is there any transport request generated for this whole data aging change process? If no such change request is generated during the process, will there not be a table structure inconsistency between different systems in the landscape, considering I had done the data aging for that table in one system & not in the other (like between DEV & QA)?

    2. When I schedule the periodic background job to take care of the future growth of the table, how will that future growth be handled?
    For example, I have done a data aging where I have three partitions: 1. for 2017 data (cold), 2. for 2018 data (cold) & 3. for 2019 data (hot), & my data restriction says anything older than 1 year should be moved to cold storage. In that case, will a 4th partition be created automatically in 2020, with the 2019 data moved to the 4th partition & the 4th partition then moved to cold storage?




  • Hi,

    I think the default values are now in bytes, not MB:




    best regards


  • Hello,


    I was wondering if parameters like page_loadable_columns_min_size are still used under HANA 2.0 SPS04, because there is the new function NATIVE STORAGE EXTENSION, which uses some similar or, in my opinion, identical techniques.

    Thanks for feedback or some hints where to get it.


    Best regards,