Jens Gleichmann

[HANA] NSE part I – tech. details Q&A

last updated: 2021-11-30

SAP NSE was introduced with HANA 2.0 SPS04 and is based on an approach similar to data aging. Data aging is an application-level approach, which has a side effect if you are using a lot of Z-coding: you have to use special BAdIs to access the correct data. This means you have to adapt your coding if you use it for Z-tables or access the data in your Z-coding through non-standard SAP functions. This was a big hurdle for adopting the feature. But in this blog I don’t want to talk about the old stuff, because there is already a lot of information out there in the form of blogs (SAP HANA Native Storage Extension: A Native Warm Data Tiering Solution by Robert Waywell; SAP HANA Native Storage Extension (NSE) – Increase HANA Data Capacity With Warm Storage by Jeetendra Kapase; NSE: Systematically Measure Memory Savings by Daniel Utvich), documentation, slides and technical guides. This blog series will therefore not cover the general stuff, but will focus on technical details and limitations which I have encountered in my projects.

We’ll start with some common questions I get asked when the first workshop takes place:

  1. Does NSE cost additional licenses?
  2. Can I use NSE for BWoH?
  3. Which tables are perfect candidates for NSE?
  4. What does the NSE structure look like?
  5. Is there any holistic advice which table should be used for NSE in every system?
  6. When should you use the NSE advisor?
  7. Is it difficult to activate NSE?
  8. Are there any prerequisites on HANA level?
  9. How can we see which data is loaded into the buffer cache?
  10. Are there more views with useful details about NSE?
  11. Should we use archiving or NSE?
  12. Can we also use indexes for NSE?
  13. How big is the benefit of NSE?
  14. How can we estimate the savings?
  15. Can NSE and pmem be used together for a better TCO?
  16. Do I have to adjust my topology regarding system resources, HSR, backup etc.?
  17. Is NSE always slower?
  18. How does the delta merge work with NSE?
  19. Is the NSE setting visible in the backend (ABAP dictionary)?
  20. Is the NSE attribute lost if we clean-up a table?
  21. Can we use it with scale-out?
  22. We are already using data aging. Do we need to switch?
  23. What about LOB fields? Can we also use NSE to reduce the memory footprint of them?
  24. What is the difference between heterogeneous and non-heterogeneous (homogeneous) partitioning?
  25. How should I partition my table to take advantage of NSE on partition level?
  26. How can the performance impact be tested?
  27. Which are the important notes I have to check before using NSE?
  28. Can I switch between ‘normal’ hot store and page loadable (NSE / warm store)?
  29. Can I pin or preload the buffer cache?
  30. Which partitioning schemes are supported?
  31. What does the syntax look like to activate NSE?
  32. How should I size my buffer cache?
  33. Do I have to do anything after NSE is active for the candidates?
  34. How to determine appropriate tables for NSE usage?
  35. Can we activate NSE during the migration to HANA?

 

Q: Does NSE cost additional licenses?

A: No, it is already included in your SAP HANA license (regardless of runtime or full use). It is covered by the phrase “data tiering option”. But it still costs memory: the buffer cache is normal DRAM, which has to be licensed according to the amount of memory. Please keep this in mind. The larger the buffer cache, the smaller the savings.

 


Q: Can I use NSE for BWoH?

A: No, it is not supported by SAP, but you can use it at your own risk. With BW/4HANA it is integrated and fully supported by SAP. AFAIK it is not on the roadmap for BWoH, based on my last talk with the product manager. You can use it with BWoH, but you have to consider some extra tasks if you drop tables and recreate them in full-load scenarios. This means additional administration effort. Use it carefully and weigh your estimated savings against the effort.

 


Q: Which tables are perfect candidates for NSE?

A: Tables which have a high change rate and a low read rate.


Q: What does the NSE structure look like?

A: The data structure looks pretty much the same as the normal column store. Only at the persistence layer is there a difference in how the dictionary is stored. There is a small overhead compared to the normal CS structure.

 


Q: Is there any holistic advice which table should be used for NSE in every system?

A: It always depends on your scenario and your expectations. But to start using NSE you can use the ESS (embedded statistics server) tables. These tables are not part of your business workload. They hold the performance and workload data of the system. This means they only produce read load when you have some issues and your DBA or SAP is analyzing the system.

Enable/Disable NSE (PAGE LOADABLE) for column tables of HANA statistics server

But there are several starting points like EDID4, ZARIX* or BALDAT.

 


Q: When should you use the NSE advisor?

A: This is just my opinion – other HANA specialists may use the advisor in another way and have had different experiences – but I’m not a fan of it. The advisor only covers a small timeframe, and some special cases and scenarios will not be included. The advisor also has a performance overhead, which can be significant, up to 20-30%. It should run for min. 7 days. As a starting point or for simple scenarios it is a wonderful tool. For complex business cases I prefer the manual analysis.

There are also some known issues regarding the NSE advisor. At the time of writing this blog there were no known issues in HANA revisions >= 57.00. If you want to use it, use this revision or higher.
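
A minimal sketch of how the advisor can be switched on and its recommendations read. The parameter and view names below follow SAP Note 2799997; verify them for your revision:

-- Enable access statistics collection for the NSE advisor
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('cs_access_statistics', 'collection_enabled') = 'true' WITH RECONFIGURE;

-- ...let it run for at least 7 days under representative load...

-- Read the load unit recommendations
SELECT * FROM M_CS_NSE_ADVISOR;

-- Disable collection again to avoid the runtime overhead
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('cs_access_statistics', 'collection_enabled') = 'false' WITH RECONFIGURE;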

 


Q: Is it difficult to activate NSE?

A: No, the activation is not complex. But determining the savings, including the real benefit, with consideration of performance and the partitioning design in the context of growth / scaling, is the supreme discipline. There is no simple method – every system is used in a different way. No setup is identical. No user behavior is exactly the same. Every NSE analysis starts nearly from zero.

 


Q: Are there any prerequisites on HANA level?

A: Yes, enough space on /hana/data, because the NSE persistence will use a little more disk space due to the dictionary metadata overhead – about 10%. You need at least HANA 2.0 SPS04 and the new persistence format for the affected tables:

SELECT * FROM M_CS_COLUMNS_PERSISTENCE WHERE PERSISTENCE_TYPE = 'VIRTUAL_FILE' OR PERSISTENCE_TYPE = 'VIRTUAL_PAGED';

If there is any output, please consult note 2927591 and follow the migration procedure.

 


Q: How can we see which data is loaded into the buffer cache?

A: This is a frequently changing snapshot, but if you want to see which data is currently inside the buffer cache, use the view M_CS_ALL_COLUMNS and the column MEMORY_SIZE_IN_PAGE_LOADABLE_MAIN:

SELECT * FROM M_CS_ALL_COLUMNS WHERE LOAD_UNIT = 'PAGE';
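
To get an aggregated view per table instead of one row per column, a query like the following can be used (a sketch based on the columns mentioned above; MEMORY_SIZE_IN_PAGE_LOADABLE_MAIN is given in bytes):

SELECT SCHEMA_NAME, TABLE_NAME, ROUND(SUM(MEMORY_SIZE_IN_PAGE_LOADABLE_MAIN) / 1024 / 1024, 2) AS PAGED_MAIN_MB
FROM M_CS_ALL_COLUMNS WHERE LOAD_UNIT = 'PAGE'
GROUP BY SCHEMA_NAME, TABLE_NAME ORDER BY PAGED_MAIN_MB DESC;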

 


Q: Are there more views with useful details about NSE?

A: Yes, here we go:

SELECT * FROM M_CS_TABLES WHERE LOAD_UNIT = 'PAGE';

SELECT * FROM M_CS_COLUMNS_PERSISTENCE;

SELECT * from M_BUFFER_CACHE_STATISTICS;

SELECT * from M_BUFFER_CACHE_POOL_STATISTICS;

 


Q: Should we use archiving or NSE?

A: Both! NSE is for warm data which is still accessed and may be changed. Archiving is for cold data which shouldn’t be changed any more. Use NSE for old data which still has to be accessed/changed and cannot be archived, or for tables which cannot be archived (no archiving objects).

 


Q: Can we also use indexes for NSE?

A: Yes, you can page out single indexes which are not frequently used. If you choose a whole table as a candidate for NSE, all indexes are paged out by default! Likewise, if you choose a specific partition for NSE, the part of the index belonging to this partition is also paged out!

 


Q: How big is the benefit of NSE?

A: This depends on your system, the usage and the end-user behavior. If the system holds more data than the current year, you can calculate with min. 15% savings, but on average for OLTP systems (ERP) you can save 20-25%.

 


Q: How can we estimate the savings?

A: There are some methods:

  • Peak sizing (fully loaded table incl. indexes + working space)
  • Payload calculation (fully loaded table incl. indexes)
  • Used payload calculation (only the data loaded on average)

In the end, you have to ask yourself from which perspective you want to act. The sizing or the usage perspective? The worst case or the normal business case? Just do a before/after estimation of the memory size down to column level. Then subtract the buffer cache and you will get your savings.

Example:

You put the table EDID4, which has a fully loaded size of 300GB, into NSE. This means all columns and indexes are paged out. Normally only 20% of the data is used. A buffer cache of 100GB is used.

Summary:

Peak sizing: 300GB *2 (payload + working space) – 100GB (buffer cache) = 500GB savings

Payload calculation: 300GB (payload) – 100GB (buffer cache) = 200GB savings

Used payload calculation: 60GB – 100GB (buffer cache) = no savings?

If you only need 60GB and it is the only table you activated NSE for, you won’t really save memory from this perspective. If your buffer cache hit ratio is always at 100% and the buffer cache never grows to its max. size, you should downsize it to save more memory.
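
The “before” numbers for such an estimation can be pulled from the monitoring views – a sketch, assuming the standard M_CS_TABLES columns:

-- Current and estimated maximum in-memory size of a candidate table
SELECT TABLE_NAME, ROUND(MEMORY_SIZE_IN_TOTAL / 1024 / 1024 / 1024, 2) AS CURRENT_GB, ROUND(ESTIMATED_MAX_MEMORY_SIZE_IN_TOTAL / 1024 / 1024 / 1024, 2) AS MAX_GB
FROM M_CS_TABLES WHERE TABLE_NAME = 'EDID4';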

 


Q: Can NSE and pmem be used together for a better TCO?

A: Yes, the main store, which is normally placed in DRAM, can be stored in pmem. The buffer cache heap allocator Pool/CS/BufferPage resides in DRAM and cannot be placed into pmem. This means every page loaded into the buffer cache is placed in DRAM. You can combine these two features if you place one (hot) partition into pmem and one into NSE. The performance impact needs to be measured for your workload, but technically it works.

 


Q: Do I have to adjust my topology regarding system resources, HSR, backup etc.?

A: No, it is a seamless and smooth integration. All data is included in HSR and the backups. The memory needed for the buffer cache should be smaller than the sum of your paged-out data. So, in the end you save twice: the data in memory (payload) and the working space in the sizing.

 


Q: Is NSE always slower?

A: When the data is loaded into the buffer cache, the access is nearly 1:1 compared to the default setting (‘COLUMN’). But in some scenarios I have observed that the partitioning design sped up the selects and especially the inserts/updates/deletes.

If the data is not yet loaded into the buffer cache and it is thus the first access, you can add on average about 10-20% of database time. But this has to be tested carefully for every scenario.

 


Q: How does the delta merge work with NSE?

A: My tests resulted in a degradation of the merge duration by a factor of 4 to 8. It depends on the status of the paged data (loaded/not loaded into the buffer cache) and the merge parameters. There was nearly no memory overhead. The buffer cache is not used for the delta merge procedure!

 


Q: Is the NSE setting visible in the backend (ABAP dictionary)?

A: It depends on your release. It is fully integrated as table setting since S/4HANA 2020. It is supported for table conversions in the following releases:


Supported basis releases

S/4HANA 2020

In SAP S/4HANA 2020, the load unit for ABAP-managed HANA column store tables (that is, load unit setting on table level) can be defined via the ABAP Data Dictionary, in the technical settings for the table (transaction SE13). Changing the SAP-delivered setting for a given table technically represents a modification.

S/4HANA 2021

Like in SAP S/4HANA 2020 the load unit setting defined in the ABAP Data Dictionary is applied on table level. The load unit for columns or partitions can be set via database means, e.g. using the SAP HANA cockpit.

Starting with SAP S/4HANA 2021, there are two flavors (preferred or enforced) for specifying the load unit for ABAP-managed HANA column tables in the ABAP Data Dictionary (in the technical settings for the table, transaction SE13):

Preferred

  • default setting
  • Changing from one preferred load unit to another does not change the load unit on the database
  • During a table conversion, the actual NSE settings of the runtime object in the HANA database will be preserved

Enforced

  • Changing the enforced load unit results in a corresponding change on the database
  • The ABAP DDIC consistency check takes the load unit into account. Different values for the load unit in the DDIC and on the database result in tables that are inconsistent from the DDIC point of view.

The ‘Preferred’ flavor for the load unit setting in the ABAP DDIC offers the possibility for a database administrator or a tool like the SAP HANA Cockpit to overwrite the DDIC settings without making the table inconsistent. Please see the following section for the ABAP tool to check and (if required) push down a preferred load unit setting to the HANA database.

Note for developers: In general, most of the tables should use one of the ‘Preferred’ settings for the load unit. Only if you are sure that the table should keep a specific load unit in all relevant scenarios (on premise, cloud, …), you should use the Enforced load unit settings. The load unit setting in the ABAP Data Dictionary is chosen by the owner / creator of a table. Changing the load unit setting of an SAP-delivered table technically represents a modification.

Source: SAP Note 2973243

Q: Is the NSE attribute lost if we clean-up a table?

A: If you delete from or truncate a table which was activated for NSE, the attribute remains afterwards. If you drop a table and recreate it without an explicit page loadable attribute, NSE is not activated for this table by default.

 


Q: Can we use it with scale-out?

A: Yes, since SPS05, starting with revision 50. My personal recommendation is to use rev. 56 or higher due to some non-critical known issues.

 


Q: We are already using data aging. Do we need to switch?

A: Data aging is a similar concept using the page attribute, but it is managed by the application. You don’t have to switch because it is still supported, but my recommendation is to use only one of the two features to simplify administration.

 


Q: What about LOB fields? Can we also use NSE to reduce the memory footprint of them?

A: Yes, but there were some restrictions and issues in the past. Please use NSE for LOB fields only if you are on rev. 56+. When you upgrade to SAP_BASIS >= 7.53, the previous VARBINARY columns remain in place until an ABAP table conversion happens. New tables are created with a LOB data type.

As a consequence, many potentially large tables with large LRAW columns (<= 5000 bytes) are completely loaded into memory (e.g. BALDAT, PCL2, SOC3, SWNCMONI, SWWCNTP0). In order to reduce the memory consumption, you can consider taking advantage of paged attributes / Native Storage Extension (so that unused column pages are evicted from memory in an LRU manner). Please also see SAP Notes 2816823 and 2898319 to safeguard NSE usage in table conversion events.

The data vector and the portion of LOBs stored in the LOB dictionary are paged. LOB data columns can be configured as either page loadable or column loadable by setting the LOAD UNIT value in the commands CREATE TABLE and ALTER TABLE.

However, memory usage for LOB data types is controlled by the Hybrid LOB configuration parameters independent of the LOAD UNIT configuration. While the LOAD UNIT can be configured to either COLUMN LOADABLE or PAGE LOADABLE for a LOB data column, it does not determine the memory usage for LOB data types.

Source: 2220627 – FAQ: SAP HANA LOBs

Q: What is the difference between heterogeneous and non-heterogeneous (homogeneous) partitioning?

A: Exactly two brackets in the syntax. Personally I think this was a bad step by SAP – not regarding the design, more regarding the syntax. A meaningful signal word would make the syntax much easier to read than two extra brackets:

Standard / non-heterogeneous / homogeneous partitioning

ALTER TABLE T PARTITION BY RANGE (C1)  
(PARTITION 0 <= VALUES < 1000, PARTITION 1000 <= VALUES < 2000, PARTITION OTHERS );

Heterogeneous (with two more brackets)

ALTER TABLE T PARTITION BY RANGE (C1)  
((PARTITION 0 <= VALUES < 1000 , PARTITION 1000 <= VALUES < 2000 , PARTITION OTHERS));
Source: SAP documentation

But there is more to say:

In SPS05 I have noticed a small performance impact of up to 10% if I used a heterogeneous partitioning design vs. non-heterogeneous.

Pro heterogeneous:

  • any attribute can be selected for partitioning, not only the PK
  • supported for: range, range-range, range-hash

Contra heterogeneous:

  • performance (must be tested)

Favorite issue when activating NSE for a non-heterogeneous partitioned table:

SAP DBTech JDBC: [7]: feature not supported: modifying partition property for non-HETEROGENEOUS partitioned tables
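
The way around this error is to repartition heterogeneously; the load unit can then be set per partition directly in the partition specification. A sketch using the example table from above (table and column names are illustrative):

ALTER TABLE T PARTITION BY RANGE (C1)
((PARTITION 0 <= VALUES < 1000 PAGE LOADABLE,
  PARTITION 1000 <= VALUES < 2000 COLUMN LOADABLE,
  PARTITION OTHERS COLUMN LOADABLE));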

 


Q: How should I partition my table to take advantage of NSE on partition level?

A: This should be analyzed and tested carefully and depends highly on how the table is used by your business. In most scenarios only the current and the previous year are of interest. This means anything older than 2 years can be paged out. For that you have to choose a partitioning attribute. Mostly there is a time range attribute like FYEAR or GJAHR. Here you can partition by year / quarter / month, depending on the amount of data. The details will be covered in the next part of this blog.
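
As an illustration of the idea – a sketch only; the schema, table name and year ranges are hypothetical, GJAHR is the fiscal year attribute mentioned above:

-- Heterogeneous range partitioning by fiscal year:
-- everything older than two years is page loadable (warm)
ALTER TABLE "SAPABAP1"."ZSALESDOC" PARTITION BY RANGE (GJAHR)
((PARTITION '0000' <= VALUES < '2020' PAGE LOADABLE,
  PARTITION '2020' <= VALUES < '2022' COLUMN LOADABLE,
  PARTITION OTHERS COLUMN LOADABLE));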

 


Q: How can the performance impact be tested?

A: You should test your most critical business transactions and jobs in a sandbox or a QAS system. In the ideal case you make use of the capture & replay feature. For this, a second system of the same size as the prod system is used. The workload is captured on the prod system and replayed on the other one. There NSE can be tested and the performance impact becomes visible.

Tip: If you use HSR for the prod system, test NSE on the secondary system during a maintenance window.

 


Q: Which are the important notes I have to check before using NSE?

A: There are some:

2915190 NSE: feature not supported: modifying partition property for non-HETEROGENEOUS partitioned table
2044468 FAQ: SAP HANA Partitioning
2799997 FAQ: SAP HANA Native Storage Extension (NSE)
2927591 SAP HANA Native Storage Extension 2.0 SPS 05 Functional Restrictions
2983008 Enable/Disable NSE (PAGE LOADABLE) for column tables of HANA statistics server
2816823 Use of SAP HANA Native Storage Extension in SAP S/4HANA and SAP Business Suite powered by SAP HANA
2973243 Guidance for use of SAP HANA Native Storage Extension in SAP S/4HANA and SAP Business Suite powered by SAP HANA

 


Q: Can I switch between ‘normal’ hot store and page loadable (NSE / warm store)?

A: Yes, but you should test the duration. SAP recommends to do this for whole tables in a maintenance phase (=downtime).

My recommendation: You can do it for small tables also online in a low load phase of the system.

 


Q: Can I pin or preload the buffer cache?

A: No, neither by default nor via a special setting. Page loadable data is loaded into the cache on first access.

 


Q: Which partitioning schemes are supported?

A:

  • heterogeneous: range, range-range, range-hash
  • non-heterogeneous: hash-range

 


Q: What does the syntax look like to activate NSE?

A: Check the documentation for details, but here are some samples:

--Activation on table level (recommended offline):
ALTER TABLE "<table_name>" PAGE LOADABLE [CASCADE];

--Activation on column level (recommended online):
ALTER TABLE "<table_name>" ALTER ("<column>" <data_type> PAGE LOADABLE); -- blocking concurrent changes
ALTER TABLE "<table_name>" ALTER ("<column>" ALTER PAGE LOADABLE); -- non-blocking conversion

--Activation on partition level (recommended online):
ALTER TABLE "<table_name>" ALTER PARTITION <part_id> PAGE LOADABLE;

--Activation on index level (recommended online):
ALTER INDEX "<schema_name>"."<index_name>" PAGE LOADABLE;
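
Whether the change took effect can be verified via the LOAD_UNIT column of the monitoring views, for example:

-- Load unit on table level
SELECT TABLE_NAME, LOAD_UNIT FROM M_CS_TABLES WHERE TABLE_NAME = '<table_name>';

-- Load unit per partition
SELECT TABLE_NAME, PART_ID, LOAD_UNIT FROM M_TABLE_PARTITIONS WHERE TABLE_NAME = '<table_name>';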

 


Q: How should I size my buffer cache?

A: Start with a factor of 1/6 of the paged-out data and size it down while monitoring the buffer cache hit ratio. If you have 600GB of data paged out to NSE, use a buffer cache of 100GB. It depends highly on which data is paged and on the end-user selections. Monitor it over 1 or 2 weeks and adjust the buffer. Keep in mind that this is a recurring action and not a one-time task.
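
A sketch of the corresponding configuration and monitoring. The max_size value is given in MB; the parameter names follow the SAP HANA administration guide, verify them for your revision:

-- Limit the NSE buffer cache to 100 GB (value in MB)
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('buffer_cache_cs', 'max_size') = '102400' WITH RECONFIGURE;

-- Monitor usage and hit ratio while sizing down
SELECT CACHE_NAME, MAX_SIZE, ALLOCATED_SIZE, USED_SIZE, HIT_RATIO FROM M_BUFFER_CACHE_STATISTICS;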

 


Q: Do I have to do anything after NSE is active for the candidates?

A: Yes: monitoring the buffer cache hit ratio, the fill degree of partitions and the performance of SQL statements (maybe new coding which was not analyzed), and creating new partitions for new months, quarters or years. It is an ongoing process, and every new HANA revision has its own new features 😉

 


Q: How to determine appropriate tables for NSE usage?

A: Tables with ‘page preferred’ load unit in the ABAP Data Dictionary (DDIC) (available starting with SAP S/4HANA 2021):

Run report DD_REPAIR_TABT_DBPROPERTIES for all tables (‘check only’ option) to determine tables having an ABAP DDIC load unit ‘page preferred’ but an actual load unit ‘column’ in the HANA database.

For a quick glance at the load unit settings in DDIC, you may check field LOAD_UNIT in table DD09L: ‘P’ = ‘page preferred’, ‘Q’ = ‘page enforced’.
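
The quick glance from above expressed as a query (a sketch; the field name is taken from the note, TABNAME is the standard DDIC key field):

-- Tables whose DDIC load unit deviates from the default 'column'
SELECT TABNAME, LOAD_UNIT FROM DD09L WHERE LOAD_UNIT IN ('P', 'Q');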

Source: SAP Note 2973243

 


Q: Can we activate NSE during the migration to HANA?

A: You can define the partitioning design before you import the data into HANA in a special SUM DMO phase. You shouldn’t activate NSE at that point, because the import performance would be impacted. Design your partitions before importing and activate NSE afterwards.

 

Do you have any questions which I have not covered yet? Just drop me a message or a comment.

      6 Comments

      Peter Wegner

      Hello Jens,

      a good and helpful blog on the topic –

      Regards

      Peter

      Roland Kramer

      you might want to check this Information as well ... 😉

      SAP (Sybase) IQ – the hidden treasure …

      Best Regards Roland

      Yüksel Tiryaki

      Thank you Jens for this great blog. To be honest, I'm still surprised this amazing feature hasn't become a standard on the market.

      Jens Gleichmann
      Blog Post Author

      Hi Yüksel,

      thanks for reading and commenting. The feature is not well known and also complex, because you have to choose your candidates wisely. But I'm with you – for some tables there should be a standard option/recommendation to activate NSE. It is hard, however, to activate NSE in general for all fast-growing tables.

      In BW/4HANA (since SPS04) there are some options with a framework to activate NSE.

      In S/4HANA 2021 there are some more implementations to activate NSE more easily. Just have a look at note 2973243.

      Yüksel, what is your wish or recommendation for an improvement by SAP?

       

      Regards,

      Jens

      Kanimeni chandra_sekhar

      Hi Jens,

      Thanks for sharing the details.

      Do you suggest partitioning a large table first, before enabling a specific column for NSE?

       

      Regards
      Chandra

       

      Jens Gleichmann
      Blog Post Author

      Hi Chandra,

      it is not necessary to partition a table if you only want to activate NSE on column level. But as a rule of thumb (check my partitioning blog): if the table is already over 25GB or 1.4 billion records, you should consider partitioning for performance reasons (e.g. delta merge, insert/update/delete, parallelism). If you expect that the table will reach these thresholds in the near future, you should partition it now. The runtime of the partitioning depends on the size of the table and the record and column count.

      Keep in mind that there are some restrictions regarding LOBs if you are on a HANA revision <= 56.

       

      Regards,

      Jens