jgleichmann
Active Contributor

last updated: 2023-12-18


SAP NSE was introduced with HANA 2.0 SPS04 and is based on an approach similar to data aging. Data aging is an application-level approach, which has a side effect if you use a lot of Z-coding: you have to use special BAdIs to access the correct data. This means you have to adapt your coding if you use it for Z-tables or if you access the data with non-standard functions in your Z-coding. This was a big hurdle for adopting the feature. But in this blog I don't want to talk about old stuff, because there is already a lot of information out there in the form of blogs (SAP HANA Native Storage Extension: A Native Warm Data Tiering Solution by robert.waywell; SAP HANA Native Storage Extension (NSE) – Increase HANA Data Capacity With Warm Storage by jeetkapase; NSE: Systematically Measure Memory Savings by danielsblog01), documentation, slides and technical guides. This blog series will therefore not cover the general basics, but will focus on technical details and limitations which I have encountered in my projects.








Currently recommended revisions for stable NSE usage:

SPS05: 59.09+

SPS06: 67.01+

Please follow SAP Notes 3280077 / 3393307 if you are on a lower revision!

Check that the parameter async_prealloc_in_chunks_enabled is set to false if you are on <= 59.10 (issues 290394/312021)! It improves the situation by avoiding unnecessary buffer allocations and deallocations in case of merge / optimize compression / CC.

 

 

We'll start with some common questions I get asked when the first workshop takes place:



    1. Does NSE cost additional licenses?

    2. Can I use NSE for BWoH?

    3. Which tables are perfect candidates for NSE?

    4. What does the NSE structure look like?

    5. Is there any holistic advice on which tables should be used for NSE in every system?

    6. When should you use the NSE advisor?

    7. Is it difficult to activate NSE?

    8. Are there any prerequisites on HANA level?

    9. How can we see which data is loaded into the buffer cache?

    10. Are there more views with useful details about NSE?

    11. Should we use archiving or NSE?

    12. Can we also use indexes for NSE?

    13. How big is the benefit of NSE?

    14. How can we estimate the savings?

    15. Can NSE and pmem be used together for a better TCO?

    16. Do I have to adjust my topology regarding system resources, HSR, backup etc.?

    17. Is NSE slower every time?

    18. How does the delta merge work with NSE?

    19. Is the NSE setting visible in the backend (ABAP dictionary)?

    20. Is the NSE attribute lost if we clean-up a table?

    21. Can we use it with scale-out?

    22. We are already using data aging. Do we need to switch?

    23. How about LOB fields? Can we also use NSE to reduce their memory footprint?

    24. What is the difference between heterogeneous and non-heterogeneous (homogeneous) partitioning?

    25. How should I partition my table to take advantage of NSE on partition level?

    26. How can the performance impact be tested?

    27. Which are the important notes I have to check before using NSE?

    28. Can I switch between 'normal' hot store and page loadable (NSE / warm store)?

    29. Can I pin or preload the buffer cache?

    30. Which partitioning schemes are supported?

    31. What does the syntax look like to activate NSE?

    32. How should I size my buffer cache?

    33. Do I have to do anything after NSE is active for the candidates?

    34. How to determine appropriate tables for NSE usage?

    35. Can we activate NSE during the migration to HANA?

    36. Is there an automatic way to add new partitions besides dynamic range partitioning by record threshold?

    37. Is NSE working with the re-distribution feature of scale-out systems?

    38. Can we change the page loadable attribute right with the creation of the object?

    39. Are all types of column store tables supported?

    40. How does the NSE buffer cache work?

    41. Are there other SAP applications supported besides S/4, BSoH and BW/4?

    42. Can I use NSE with HANA Cloud?

    43. How are NSE objects treated in the consistency check?

    44. Is there a known issue list for NSE bugs?

    45. Are there any default tables with NSE delivered by SAP?

    46. Is it possible to page out the audit log table?

    47. Where is NSE data stored and can I configure the location?

    48. Is the HANA sizing report considering NSE tables?

    49. Do I have to reimplement NSE after a system copy?

    50. Is there a recommended partition size for tables fully or partially in NSE?




 

Q: Does NSE cost additional licenses?

A: No, it is already included in your SAP HANA license (regardless of whether it is a runtime or full-use license). It is covered by the phrase "data tiering option". But it still costs memory: the buffer cache is also normal DRAM, which has to be licensed according to the amount of memory. Please keep this in mind - the larger the buffer cache, the smaller the savings.

 




Q: Can I use NSE for BWoH?

A: No, it is not supported by SAP for BWoH, but you can use it at your own risk. With BW/4HANA it is integrated and fully supported by SAP. As far as I know from my last talk with the product manager, it is not on the roadmap for BWoH. You can use it with BWoH, but you have to consider some extra tasks if you drop tables and recreate them in full-load scenarios, which means additional administration effort. Use it carefully and weigh your savings against the effort.

 




Q: Which tables are perfect candidates for NSE?

A: Tables which have a high change rate and a low read rate. Changed data in the delta area stays hot in DRAM and has the same access speed as the normal column store. Only the access speed of data stored in the main area is affected, and if that data is already loaded into the buffer cache, the speed is nearly the same as in the normal column store.
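
If you want a rough first overview of such candidates, a simple query on M_CS_TABLES (which exposes READ_COUNT, WRITE_COUNT and memory sizes) can help. This is only a sketch - the size and ratio filters below are illustrative assumptions, not an official rule:
-- rough candidate screening: large column store tables with more writes than reads
SELECT SCHEMA_NAME, TABLE_NAME,
       ROUND(MEMORY_SIZE_IN_TOTAL/1024/1024/1024, 2) AS SIZE_GB,
       READ_COUNT, WRITE_COUNT
FROM M_CS_TABLES
WHERE MEMORY_SIZE_IN_TOTAL > 10*1024*1024*1024   -- illustrative: only tables > 10GB
  AND READ_COUNT < WRITE_COUNT                   -- illustrative: write-dominated access
ORDER BY MEMORY_SIZE_IN_TOTAL DESC;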







Q: What does the NSE structure look like?

A: The data structure looks pretty much the same as in the normal column store. Only at the persistence layer is there a difference in how the dictionary is stored, which causes a small overhead compared to the normal CS structure.

 




Q: Is there any holistic advice on which tables should be used for NSE in every system?

A: It always depends on your scenario and your expectations. But as a starting point you can use the ESS (embedded statistics server) tables. These tables are not part of your business workload; they hold the performance and workload data of the system. This means they only produce read load when you have an issue and your DBA or SAP is analyzing the system.

See SAP Note 2983008: Enable/Disable NSE (PAGE LOADABLE) for column tables of HANA statistics server

There are also several other typical starting points like EDID4, ZARIX*, DBTABLOG or BALDAT.

 




Q: When should you use the NSE advisor?

A: This is just my opinion - other HANA specialists may use the advisor differently and have had different experiences - but I'm not a fan of it. The advisor only covers a small timeframe, and some special cases and scenarios will not be included. It also has a performance overhead which can be significant, up to 20-30%. It should run for at least 7 days. As a starting point or for simple scenarios it is a wonderful tool. For complex business cases I prefer a manual analysis to cover the needs of the business and special timeframes like quarter or year-end closing. The advisor also cannot prioritize the SQL statements, e.g. SE16N selects or archiving runs, which may falsify the result.

There are also some known issues regarding the NSE advisor. At the time of writing this blog there were no known issues in HANA revisions >= 57.00. If you want to use it, use one of the latest revisions.
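
For reference, a minimal sketch of how the advisor is typically switched on and evaluated (based on the cs_access_statistics parameter section and the M_CS_NSE_ADVISOR view; please verify against the documentation of your revision):
-- enable access statistics collection for the NSE advisor
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('cs_access_statistics', 'collection_enabled') = 'true' WITH RECONFIGURE;
-- let a representative workload run (ideally at least 7 days), then read the recommendations
SELECT * FROM M_CS_NSE_ADVISOR;
-- switch the collection off again to avoid the overhead
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('cs_access_statistics', 'collection_enabled') = 'false' WITH RECONFIGURE;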

 




Q: Is it difficult to activate NSE?

A: No, the activation is not complex, but determining the savings - including the real benefit with consideration of the performance and the partitioning design in the context of growth and scaling - is the supreme discipline. There is no simple method: every system is used in a different way, no setup is identical, and no user behavior is exactly the same. Every NSE analysis starts nearly at zero.

 




Q: Are there any prerequisites on HANA level?

A: Yes, enough space on /hana/data, because the NSE persistence will use a little more disk space due to the dictionary metadata overhead - roughly 10%. You need at least HANA 2.0 SPS04 and the new persistence format for the affected tables.
SELECT * FROM M_CS_COLUMNS_PERSISTENCE WHERE PERSISTENCE_TYPE = 'VIRTUAL_FILE' or PERSISTENCE_TYPE = 'VIRTUAL_PAGED'

If there is any output, please consult SAP Note 2927591 and follow the migration procedure.

 




Q: How can we see which data is loaded into the buffer cache?

A: This is a frequently changing snapshot, but if you want to see which data is currently inside the buffer cache, use the view M_CS_ALL_COLUMNS and the column MEMORY_SIZE_IN_PAGE_LOADABLE_MAIN:
SELECT * FROM M_CS_ALL_COLUMNS WHERE LOAD_UNIT = 'PAGE';

or use the SQL script HANA_NSE_BufferCache_Tables_2.00.040+

 




Q: Are there more views with useful details about NSE?

A: Yes, here we go:
SELECT * FROM M_CS_TABLES WHERE LOAD_UNIT = 'PAGE';

SELECT * FROM M_CS_COLUMNS_PERSISTENCE;

SELECT * from M_BUFFER_CACHE_STATISTICS;

SELECT * from M_BUFFER_CACHE_POOL_STATISTICS;

 




Q: Should we use archiving or NSE?

A: Both! NSE is for warm data which is still accessed and may be changed. Archiving is for cold data which shouldn't be changed any more. Use NSE for old data which still has to be accessed/changed and cannot be archived, or for tables which cannot be archived at all (no archiving objects).

 




Q: Can we also use indexes for NSE?

A: Yes, you can page out single indexes which are not frequently used. If you choose a whole table as a candidate for NSE, all its indexes are paged out by default! Likewise, if you choose a specific partition for NSE, the part of each index belonging to this partition is paged out as well!

If you cannot delete an index because it is an SAP standard index, you can use NSE to page this unused index out of memory.




Q: How big is the benefit of NSE?

A: This depends on your system, its usage and the end-user behavior. If the system holds more than the current year of data, you can calculate with at least 15% savings; on average, for OLTP systems (ERP), you can save 20-25%.

 




Q: How can we estimate the savings?

A: There are a few methods:

Peak sizing (full loaded table incl. indexes + working space)

Payload calculation (full loaded table incl. indexes)

Used payload calculation (only average loaded data)

In the end, you have to ask yourself from which perspective you want to act: the sizing or the usage perspective? The worst case or the normal business scenario? Just do a before-and-after estimation of the memory size down to column level, then subtract the buffer cache and you get your savings.

Example:

You put the table EDID4, which has a fully loaded size of 300GB, into NSE. This means all columns and indexes are paged out. Normally only 20% of the data is used. A 100GB buffer cache is configured.

Summary:

Peak sizing: 300GB * 2 (payload + working space) - 100GB (buffer cache) = 500GB savings

Payload calculation: 300GB (payload) - 100GB (buffer cache) = 200GB savings

Used payload calculation: 60GB - 100GB (buffer cache) = no savings?

If you only need 60GB and it is the only table for which you activated NSE, you won't really save memory from this perspective. If your buffer cache hit ratio is always at 100% and the buffer cache never grows to its maximum size, you should downsize it to save more memory.
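
A simple before/after measurement on column level can be done with M_CS_ALL_COLUMNS; this is just a sketch of the idea, using EDID4 from the example above:
-- memory footprint per load unit on column level (run before and after NSE activation)
SELECT SCHEMA_NAME, TABLE_NAME, LOAD_UNIT,
       ROUND(SUM(MEMORY_SIZE_IN_TOTAL)/1024/1024/1024, 2) AS SIZE_GB
FROM M_CS_ALL_COLUMNS
WHERE TABLE_NAME = 'EDID4'
GROUP BY SCHEMA_NAME, TABLE_NAME, LOAD_UNIT;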

 




Q: Can NSE and pmem be used together for a better TCO?

A: Yes, the main store, which is normally placed in DRAM, can be stored in pmem. The buffer cache heap allocator Pool/CS/BufferPage resides in DRAM and cannot be placed into pmem. This means all data loaded into the buffer cache ends up in DRAM. You can combine the two features if you place one (hot) partition into pmem and another one into NSE. The performance impact has to be measured against your requirements, but technically it works.

 




Q: Do I have to adjust my topology regarding system resources, HSR, backup etc.?

A: No, it is a seamless and smooth integration. All data is included in HSR and the backups. The memory needed for the buffer cache should be smaller than the sum of your paged-out data. So, in the end you save twice in the sizing: the data in memory (payload) and the working space.

 




Q: Is NSE slower every time?

A: When the data is loaded into the buffer cache, the access time is nearly 1:1 compared to the default setting ('COLUMN'). In some scenarios I have even observed that the partitioning design speeds up the selects and especially the inserts/updates/deletes.

If the data is not loaded into the buffer cache, i.e. on the first access, you can add on average about 10-20% of database time. But this has to be tested carefully for every scenario.

 




Q: How does the delta merge work with NSE?

A: My tests resulted in a degradation of the merge duration by a factor of 4 to 8. It depends on the status of the paged attribute (loaded/not loaded into the buffer cache) and the merge parameters. There was nearly no overhead regarding memory; the buffer cache is not used for the delta merge procedure!

The delta part of NSE objects stays hot inside DRAM. That is also the reason for the candidate criteria: high write ratio, low read ratio.

All columns of the merged table must be loaded to merge it. If the buffer cache (=BC) is not big enough to hold the entire table, at least the biggest column must fit into the buffer cache.

The consistency check (CC) and the optimize compression behave similarly to the merge process: the biggest paged-out NSE column must fit into the buffer cache. This means if you have a 10GB BC and the biggest column (per partition) is 15GB, you have to increase your BC.
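
To check whether you are at risk, you can compare the largest page loadable columns with the configured buffer cache size. A sketch based on the standard monitoring views (keep in mind that the in-memory size of a currently paged-out column can understate its full size):
-- largest page loadable columns per table / partition
SELECT TOP 20 SCHEMA_NAME, TABLE_NAME, PART_ID, COLUMN_NAME,
       ROUND(MEMORY_SIZE_IN_TOTAL/1024/1024/1024, 2) AS SIZE_GB
FROM M_CS_ALL_COLUMNS
WHERE LOAD_UNIT = 'PAGE'
ORDER BY MEMORY_SIZE_IN_TOTAL DESC;
-- configured maximum size of the buffer cache
SELECT HOST, PORT, CACHE_NAME, ROUND(MAX_SIZE/1024/1024/1024, 2) AS MAX_SIZE_GB
FROM M_BUFFER_CACHE_STATISTICS;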

 




Q: Is the NSE setting visible in the backend (ABAP dictionary)?

A: It depends on your release. It is fully integrated as a table setting since S/4HANA 2020. It is supported for table conversions in the following releases:


Supported basis releases


S/4HANA 2020

In SAP S/4HANA 2020, the load unit for ABAP-managed HANA column store tables (that is, load unit setting on table level) can be defined via the ABAP Data Dictionary, in the technical settings for the table (transaction SE13). Changing the SAP-delivered setting for a given table technically represents a modification.

S/4HANA 2021

As in SAP S/4HANA 2020, the load unit setting defined in the ABAP Data Dictionary is applied on table level. The load unit for columns or partitions can be set via database means, e.g. using the SAP HANA cockpit.

Starting with SAP S/4HANA 2021, there are two flavors (preferred or enforced) for specifying the load unit for ABAP-managed HANA column tables in the ABAP Data Dictionary (in the technical settings for the table, transaction SE13):

Preferred

  • default setting

  • Changing from one preferred load unit to another does not change the load unit on the database

  • During a table conversion, the actual NSE settings of the runtime object in the HANA database will be preserved


Enforced

  • Changing the enforced load unit results in a corresponding change on the database

  • The ABAP DDIC consistency check takes the load unit into account. Different values for the load unit in the DDIC and on the database result in tables that are inconsistent from the DDIC point of view.


The 'Preferred' flavors of the load unit setting in the ABAP DDIC give a database administrator or a tool like the SAP HANA cockpit the possibility to overwrite the DDIC settings without making the table inconsistent. Please see the following section for the ABAP tool to check and (if required) push down a preferred load unit setting to the HANA database.

Note for developers: In general, most tables should use one of the 'Preferred' settings for the load unit. Only if you are sure that the table should keep a specific load unit in all relevant scenarios (on premise, cloud, ...) should you use the 'Enforced' load unit settings. The load unit setting in the ABAP Data Dictionary is chosen by the owner/creator of a table. Changing the load unit setting of an SAP-delivered table technically represents a modification.
Source: SAP Note 2973243





Q: Is the NSE attribute lost if we clean-up a table?

A: If you delete from or truncate a table which was activated for NSE, the attribute is retained afterwards. If you drop the table and recreate it without an explicit page loadable attribute, NSE is not activated for this table by default.

 




Q: Can we use it with scale-out?

A: Yes, since SPS05, starting with Revision 50. My personal recommendation is to use Rev. 56 or higher due to some non-critical known issues.

But we have to differentiate between the products, as this is not fully spelled out in all notes:





  • Data Tiering Optimization (DTO) supports NSE as warm-data store (as an alternative to the SAP HANA Extension Node)

  • SAP BW/4HANA on SAP HANA 2 SPS04 and SAP HANA 2 SPS05: only scale-up

  • SAP BW/4HANA (as from BW/4HANA 2021 SP04) on SAP HANA 2 SPS06 in scale-up and scale-out

  • SAP Business Warehouse on SAP HANA (BWoH) doesn’t support NSE in scale-up or scale-out

  • S/4HANA scale-out is supported since SPS05










Q: We are already using data aging. Do we need to switch?

A: Data aging is a similar concept using the paged attribute, but it is managed by the application. You don't have to switch because data aging is still supported, but my recommendation is to use only one of the two features to keep the administration simple.

 




Q: How about LOB fields? Can we also use NSE to reduce their memory footprint?

A: Yes, but there were some restrictions and issues in the past. Please only use it for LOB fields if you are on Rev. 56+. When you upgrade to SAP_BASIS >= 7.53, the previous VARBINARY columns remain in place until an ABAP table conversion happens; new tables are created with a LOB data type.

As a consequence, many potentially large tables with large LRAW columns (<= 5000 bytes) are completely loaded into memory (e.g. BALDAT, PCL2, SOC3, SWNCMONI, SWWCNTP0). In order to reduce the memory consumption, you can consider taking advantage of paged attributes / Native Storage Extension (so that unused column pages are evicted from memory in an LRU manner). Please also see SAP Notes 2816823 and 2898319 to safeguard NSE usage in table conversion events.

There can be blocked savepoints during "ALTER TABLE ... LOB REORGANIZE" or when changing memory thresholds for LOBs involving paged/NSE columns.

Workaround: convert the LOB columns to column loadable, then alter the memory threshold, and then convert them back to page loadable. For more details have a look at my blog: details about HANA LOBs.








Attention

The data vector and the portion of LOBs stored in the LOB dictionary are paged. LOB data columns can be configured as either page loadable or column loadable by setting the LOAD UNIT value in the commands CREATE TABLE and ALTER TABLE.

However, memory usage for LOB data types is controlled by the Hybrid LOB configuration parameters independent of the LOAD UNIT configuration. While the LOAD UNIT can be configured to either COLUMN LOADABLE or PAGE LOADABLE for a LOB data column, it does not determine the memory usage for LOB data types.

This means the LOBs activated for NSE are placed into the LOB page cache and not into the buffer cache.

Conclusion: It only makes sense to page out LOB columns if they are small (default definition < 1KB) and stored as in-memory data.



Source: 2220627 - FAQ: SAP HANA LOBs





Q: What is the difference between heterogeneous and non-heterogeneous (homogeneous) partitioning?

A: Exactly two more brackets in the syntax. Personally, I think this was a bad step by SAP - not regarding the design, but regarding the syntax. A meaningful keyword would make it much easier to recognize than two additional brackets:

Standard / non-heterogeneous / homogeneous partitioning
ALTER TABLE T PARTITION BY RANGE (C1)  
(PARTITION 0 <= VALUES < 1000, PARTITION 1000 <= VALUES < 2000, PARTITION OTHERS );

heterogeneous, with two additional brackets
ALTER TABLE T PARTITION BY RANGE (C1)  
((PARTITION 0 <= VALUES < 1000 , PARTITION 1000 <= VALUES < 2000 , PARTITION OTHERS));

Source: SAP documentation

But there is more to say:

In SPS05 I noticed a small performance impact of up to 10% when using a heterogeneous partitioning design vs. a non-heterogeneous one.

Pro heterogeneous                                          Contra heterogeneous
select any attribute for partitioning, not only the PK    performance (must be tested)
supported for: Range, Range-range, Range-hash             no Range partitioning on first level (incl. dyn. range partitioning)

Favorite issue when activating NSE for a non-heterogeneous partitioned table:







SAP DBTech JDBC: [7]: feature not supported: modifying partition property for non-HETEROGENEOUS partitioned tables


or







Could not execute 'ALTER TABLE "<schema_name>"."<table_name>" ALTER PARTITION <number_of_partitions> PAGE LOADABLE'
SAP DBTech JDBC: [7]: feature not supported:
The table is partitioned with a time selection partitioning. For this kind of partitioning it is not allowed to change loading type.


 




Q: How should I partition my table to take advantage of NSE on partition level?

A: This should be analyzed and tested carefully and depends highly on how your business uses the table. In most scenarios the current and the previous year are the interesting ones, which means anything older than 2 years can be paged out. You then have to choose a partitioning attribute; mostly there is a time-related attribute like FYEAR or GJAHR. You can partition by year / quarter / month, depending on the amount of data. The details will be covered in the next part of this blog series.
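
As a purely hypothetical illustration (the real design must come out of your own analysis): a heterogeneous range partitioning on a year-like column where the older ranges are marked page loadable right away. Table, column and year values are placeholders; please verify the per-partition load unit syntax against the documentation of your revision:
-- hypothetical example: keep recent fiscal years hot, page out older ones
ALTER TABLE "<schema_name>"."<table_name>" PARTITION BY RANGE ("GJAHR")
((PARTITION '2000' <= VALUES < '2022' PAGE LOADABLE,
  PARTITION '2022' <= VALUES < '2024' COLUMN LOADABLE,
  PARTITION OTHERS COLUMN LOADABLE));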

 




Q: How can the performance impact be tested?

A: You should test your most critical business transactions and jobs in a sandbox or a QAS system. Ideally you can benefit from the capture & replay feature: a second system with the same size as the production system is used, the workload is captured on the production system and replayed on the other one. There NSE can be tested and the performance impact becomes visible.






Tip: If you use HSR for the prod system, test NSE on the secondary system during a maintenance window.

 




Q: Which are the important notes I have to check before using NSE?

A: There are some:

2915190 NSE: feature not supported: modifying partition property for non-HETEROGENEOUS partitioned table
2044468 FAQ: SAP HANA Partitioning
2799997 FAQ: SAP HANA Native Storage Extension (NSE)
2927591 SAP HANA Native Storage Extension 2.0 SPS 05 Functional Restrictions
3123259 SAP HANA Native Storage Extension 2.0 SPS 06 Functional Restrictions
3320095 SAP HANA Native Storage Extension 2.0 SPS 07 Functional Restrictions
2983008 Enable/Disable NSE (PAGE LOADABLE) for column tables of HANA statistics server
2816823 Use of SAP HANA Native Storage Extension in SAP S/4HANA and SAP Business Suite powered by SAP HANA
2973243 Guidance for use of SAP HANA Native Storage Extension in SAP S/4HANA and SAP Business Suite powered by SAP HANA

 




Q: Can I switch between 'normal' hot store and page loadable (NSE / warm store)?

A: Yes, but you should test the duration. SAP recommends doing this for whole tables in a maintenance phase (= downtime).

My recommendation: for small tables you can also do it online during a low-load phase of the system.

 




Q: Can I pin or preload the buffer cache?

A: No, not by default or via a special setting. Page loadable data is loaded into the cache after its first access. Pinning doesn't make any sense: if you want the data pinned, keep it column loadable.

 




Q: Which partitioning schemes are supported?

A:

heterogeneous            non-heterogeneous
Range                    Hash-range
Range-range
Range-hash

 
# heterogeneous RANGE partitioning
ALTER TABLE <table_name> partition by RANGE (<Column>) ((PARTITION 19700101 <= VALUES < 20100101, PARTITION 20100101 <= VALUES < 20160101, PARTITION 20160101 <= VALUES < 20200101, PARTITION 20200101 <= VALUES < 20210101, PARTITION 20210101 <= VALUES < 20220101, PARTITION 20220101 <= VALUES < 20230101, PARTITION OTHERS))

# heterogeneous RANGE-RANGE partitioning
ALTER TABLE <table_name> partition by range (<Column>) ((PARTITION 100 <= VALUES < 200, PARTITION 200 <= VALUES < 400, PARTITION OTHERS)
SUBPARTITION by RANGE (<Column>) (PARTITION 19700101 <= VALUES < 20100101, PARTITION 20100101 <= VALUES < 20160101, PARTITION 20160101 <= VALUES < 20200101, PARTITION 20200101 <= VALUES < 20210101, PARTITION 20210101 <= VALUES < 20220101, PARTITION 20220101 <= VALUES < 20230101, PARTITION OTHERS))

# heterogeneous RANGE-HASH partitioning
ALTER TABLE <table_name> partition by range (<Column>) ((PARTITION 100 <= VALUES < 200, PARTITION 200 <= VALUES < 400, PARTITION OTHERS)
SUBPARTITION by HASH (<Column>) PARTITIONS 6)


# non-heterogeneous / homogeneous HASH-RANGE partitioning
ALTER TABLE <table_name> partition by HASH (<Column>) PARTITIONS 6,
RANGE (<Column>) (PARTITION 19700101 <= VALUES < 20100101, PARTITION 20100101 <= VALUES < 20160101, PARTITION 20160101 <= VALUES < 20200101, PARTITION 20200101 <= VALUES < 20210101, PARTITION 20210101 <= VALUES < 20220101, PARTITION 20220101 <= VALUES < 20230101, PARTITION OTHERS)

Please verify that you are using the correct documentation for your release. The syntax differences (commas, brackets, subpartitions, etc.) are small but important!

Documentation:

 

 




Q: How the syntax looks like to activate NSE?

A: Check the documentation for details, but here are some samples:
--Activation on Table level (recommended offline):
ALTER TABLE "<table_name>" PAGE LOADABLE [CASCADE];

--Activation on Column level (recommended online):
ALTER TABLE <table_name> ALTER ("<column>" <data_type> PAGE LOADABLE);  -- blocking concurrent changes
ALTER TABLE "<table_name>" ALTER ("<column>" ALTER PAGE LOADABLE);  -- non-blocking conversion

--Activation on Partition level (recommended online):
ALTER TABLE <table_name> ALTER PARTITION <part_id> PAGE LOADABLE;

--Activation on Index level (recommended online):
ALTER "<schema_name>"."<index_name>" PAGE LOADABLE;

 






Attention: Starting with revision >= 2.00.059.01 (60+), the in-memory processing is done prior to acquiring the consistent change lock in the context of the table page-cascade conversion process.

 




Q: How should I size my buffer cache?

A: Start with a factor of 6 and size it down while monitoring the buffer cache hit ratio: if you have 600GB of data paged out to NSE, start with a buffer cache of 100GB. It depends highly on which data is accessed and on the end-user selections. Monitor it over 1 or 2 weeks and adjust the buffer. Keep in mind that this is a recurring activity and not a one-time task.
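
For the monitoring part, a sketch based on the standard buffer cache view (column names as documented for M_BUFFER_CACHE_STATISTICS):
-- current size and hit ratio of the CS buffer cache
SELECT HOST, PORT, CACHE_NAME,
       ROUND(MAX_SIZE/1024/1024/1024, 2)  AS MAX_SIZE_GB,
       ROUND(USED_SIZE/1024/1024/1024, 2) AS USED_SIZE_GB,
       HIT_RATIO
FROM M_BUFFER_CACHE_STATISTICS
WHERE CACHE_NAME = 'CS';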

Default CS buffer cache size is 10% of PAL








indexserver.ini -> [buffer_cache_cs] -> max_size (in MB)

indexserver.ini -> [buffer_cache_cs] -> max_size_rel (in %) [default: 0 which means 10]

indexserver.ini -> [buffer_cache_cs] -> unload_threshold [default: 80] (SPS07)

indexserver.ini -> [buffer_cache_cs] -> async_prealloc_in_chunks_enabled = false
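
For reference, a sketch of how such a parameter can be set via SQL; the value is only an example, not a recommendation:
-- example: fix the CS buffer cache at 100 GB (value in MB)
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('buffer_cache_cs', 'max_size') = '102400' WITH RECONFIGURE;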

If you use the relative parameter, your cache will be adjusted automatically if you resize your server memory. Despite this, my recommendation is to set a fixed size, because otherwise the parameter tends to be forgotten and the cache ends up too big or too small. Be aware that once the cache is fully used, it is not decreased/unloaded even if the data is no longer accessed; you can decrease the parameter, which will free the memory. With SPS07 the NSE elastic buffer cache is delivered, which addresses this gap.




There is also a new alert regarding out-of-buffer events:

Alert ID: 133
Name: Check out-of-buffers event for a buffer cache
Description: Checks whether a buffer cache is sufficiently configured. An out-of-buffers event indicates that the affected buffer cache is insufficiently configured for the current workload. It should be resolved, otherwise queries may fail.





Q: Do I have to do anything after NSE is active for the candidates?

A: Yes: monitor the buffer cache hit ratio, the fill degree of the partitions and the performance of the SQL statements (there may be new coding which was not analyzed), and create new partitions for new months, quarters or years. It is an ongoing process, and every new HANA revision brings its own new features 😉

 




Q: How to determine appropriate tables for NSE usage?

A: Tables with 'paged preferred' load unit in the ABAP Data Dictionary (DDIC) (available starting with SAP S/4HANA 2021):

Run the report DD_REPAIR_TABT_DBPROPERTIES for all tables ('check only' option) to determine tables that have the ABAP DDIC load unit 'page preferred' but the actual load unit 'column' in the HANA database.

For a quick glance at the load unit settings in the DDIC, you can check the field LOAD_UNIT in table DD09L: 'P' = 'page preferred', 'Q' = 'page enforced'.

Source: SAP Note 2973243
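
If you prefer a quick look on SQL level (assuming you are allowed to query the ABAP schema), a minimal sketch on DD09L could look like this; the schema name is a placeholder:
-- tables with a page-related load unit setting in the ABAP DDIC
SELECT TABNAME, LOAD_UNIT
FROM "<sap_schema>"."DD09L"
WHERE LOAD_UNIT IN ('P', 'Q');   -- 'P' = page preferred, 'Q' = page enforced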

 




Q: Can we enable NSE during migration to HANA?

A: You can define the partitioning design before you import the data into HANA, in a dedicated SUM DMO phase. You shouldn't activate NSE at that point because the import performance can be impacted. Design your partitions before importing and activate NSE afterwards.

 




Q: Is there an automatic way to add new partitions besides dynamic range partitioning by record threshold?

A: Starting with SPS06 there is a new interval option for range partitions:
#quarterly new partition
ALTER TABLE T PARTITION OTHERS DYNAMIC INTERVAL 3 MONTH;
#half yearly new partition
ALTER TABLE T PARTITION OTHERS DYNAMIC INTERVAL 6 MONTH;
#after 2 years new partition
ALTER TABLE T PARTITION OTHERS DYNAMIC INTERVAL 2 YEAR;

Documentation

 




Q: Is NSE working with the re-distribution feature of scale-out systems?

A: Yes, it is supported starting with HANA SPS06.

 




Q: Can we change the page loadable attribute right with the creation of the object?

A: Yes, it is possible to specify the load unit on column, table and partition level during creation. Note that it was not possible to change the partitioning design AND the load unit at once - you had to do it in a 2-step approach. Starting with SPS06 this is possible in one step.
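
A minimal sketch of specifying the load unit already at creation time; table and column names are made up, the syntax follows the CREATE TABLE documentation:
-- table default is page loadable (NSE), one column is explicitly kept hot
CREATE COLUMN TABLE "MY_LOG_TABLE" (
   "ID"      BIGINT COLUMN LOADABLE,   -- keep the key column column loadable
   "PAYLOAD" NVARCHAR(500)             -- inherits the table-level load unit
) PAGE LOADABLE;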

 




Q: Are all types of column store tables supported?

A: No!

Type                 NSE possible?
normal CS            yes
no logging           no
disable delta log    yes
local temp           no
global temp          no

 




Q: How does the NSE buffer cache work?

A: The first access to a page loadable table transfers the data from disk into the cache. It remains there until the LRU eviction removes it due to a lack of access activity. The cache fills up to its limit before eviction kicks in, so please choose the size of the cache wisely. The size can be shrunk by modifying the cache size parameter. If a table with the page loadable attribute is dropped, its pages are removed from the cache and the used size drops!

 




Q: Are there other SAP applications supported besides S/4, BSoH and BW/4?

A: Yes:

  • Financial Products Subledger
  • SAP Enterprise Threat Detection (ETD) 2.0 SP4 PL 00+
  • SAP Customer Activity Repository (CAR) 5.0 FPS02+
  • SAP Manufacturing Execution (ME) 15.5+
  • SAP Customer Profitability Analytics (CPA) 2.6.0+
  • HANA native applications
 





There could be more, but that is my currently known state. If a product is missing, just drop me a message or a comment. There have also been some tests with SolMan, which were officially presented by SAP, but currently there is no official support statement for it.

 




Q: Can I use NSE with HANA Cloud?

A: Yes, but there are some limitations due to the different codeline.

Feature                                 HANA Cloud
NSE Advisor                             not available (planned)
Support for Statistics Server Tables    page-loadable (default; configurable with parameter: enable_page_loadable)
 




Q: How are NSE objects treated in the consistency check?

A: Objects will be fully loaded and then checked. This means the buffer cache must be able to serve a request of at least the size of the largest table/partition. For instance, if your buffer cache is 50GB and your largest table is 60GB, you will receive the error "ERROR_CHECK_FAILED". The buffer cannot be overprovisioned.






checking column table <schema>.<table>. Caught exception: exception 3020054: CS cannot provide requested memory as it's limit of <mb> MByte is exceeded.

 







indexserver.ini -> [table_consistency_check] -> enable_automatic_unload = true


There is a parameter for an early unload, but it is not valid for NSE tables. The issue was fixed with Rev. 55 and further improved with Rev. 59.






This means the minimum recommendation is Rev. 59.00 for SPS05 and Rev. 64 for SPS06.
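
For reference, the consistency check that runs into this limit is the standard procedure call, e.g.:
-- standard table consistency check for a single table (this is where the full load happens)
CALL CHECK_TABLE_CONSISTENCY('CHECK', '<schema_name>', '<table_name>');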

 




Q: Is there a known issue list for NSE bugs?

A: No, there is no official list, but I try to list the ones I know here:

3304807 - System Hang due to Optimize Compression on Page-Loadable Tables Blocking the Savepoint (fixed in Rev. 59.05 / 65 / 70)
3280077 - Indexserver Crash at UnifiedTable::NBitDataAccessor64::mget() (fixed in Rev. 59.07 / 66 / 70)
3086101 - High CPU and Blocked Savepoint When Conversion to NSE is Triggered (fixed in Rev. 55)
3081357 - SAP HANA NSE Advisor - Signal 11 Error While Querying M_CS_NSE_ADVISOR With Data Collection Open for a Short Time (fixed in Rev. 57)
2994330 - Slow Table-Wise LOB Garbage Collection for LOB Columns Defined as Page Loadable Attributes (fixed in Rev. 57)
3251660 - Suboptimal Performance of SQL Command "ALTER TABLE ADD COLUMN [...]" (fixed in Rev. 59.04)
3256149 - NSE Table re-partition is hanging due to a blocked savepoint (fixed in Rev.: -)
303564 / 3393307 - Savepoint blocking situation during a delta merge when an inverted index on a paged attribute was updated (fixed in Rev. 59.09 / 67.01 / 70)
309623 - Increasing data storage of a page loadable column store table (fixed in Rev. 59.10 / 67.03 / 73)
309235 - Extended savepoint runtime after optimizing compression on a table with paged attributes (fixed in Rev. 59.10 / 67.03 / 73)
272590 - Savepoint being blocked for up to a few minutes (fixed in Rev. 59.01 / 60.00 / 70)
290394 - Blocking the savepoint with "PageAccess::PageControlBlock::destroyResource()" (fixed in Rev. 59.05 / 64 / 70)
3364504 - Long Savepoint Blocking Phase Caused by Optimized Compression (workaround: 59.05 / 60 / 70; solution: 73)
312021 - Free list management in context of NSE (workaround: 59.05; fixed in Rev. 59.11 / 64)
3401911 - Savepoint is Blocked by Paged Inverted Index Creation in case of Full NSE Buffer Cache (fixed in Rev. 59.09 / 67.01 / 70)

 




Q: Are there any default tables with NSE delivered by SAP?

A: Yes, since S/4HANA 2021 over 150 objects are delivered with NSE settings during the EU_IMPORT phase. The most common ones are dictionary tables like DD03L, DOKTL, DYNPSOURCE, TADIR etc. This means every customer on S/4HANA 2021+ is now using NSE, so please take care of your configuration.

S/4 tables after update/conversion:


 

Structure file during import:


 




Q: Is it possible to page out the audit log table?

A: Not before HANA 2.0 SPS07. Starting with HANA 2.0 Rev. 70 it is possible to page out the table "CS_AUDIT_LOG_" with the following command:
ALTER SYSTEM ALTER AUDIT LOG PAGE LOADABLE;

 




Q: Where is NSE data stored and can I configure the location?

A: By default, NSE data is stored in the configured HANA data location (normally /hana/data/). Currently there is no way to configure a separate location apart from the HANA data location. The data is handled like normal column store data with regard to I/O, HSR and backup & recovery.

 




Q: Is the HANA sizing report considering NSE tables?

A: Yes, since version 91 of /SDF/HDB_SIZING it considers the default NSE tables in S/4HANA and the suggestions by SAP.

 




Q: Do I have to reimplement NSE after a system copy?

A: No, the NSE attributes remain the same as in the source. This means all NSE settings which were applied to PRD will also end up in the QAS system via backup & recovery or HSR. You can set the buffer cache a little more aggressively (= smaller) there to save memory.

 




Q: Is there a recommended partition size for tables fully or partially in NSE?

A: When NSE is used for a table (regardless of whether the full table is page loadable, or only selected columns or partitions), comparatively small partitions should be designed in order to optimize IO-intensive operations such as delta merges. Because of the focus on IO, the relevant measurable is not the number of records per partition, but rather the partition size in bytes. When using NSE with a table, partition sizes should be smaller than 10 GB. This also means that if the size of a table that is fully or partially (selected columns) in NSE reaches 10 GB, this table should be partitioned.
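
To get a quick overview of which NSE tables or partitions are approaching that limit, you can look at the current sizes per partition; a sketch based on M_CS_TABLES (note that the in-memory size of largely paged-out partitions can understate their on-disk size):
-- page loadable tables / partitions and their current total size
SELECT SCHEMA_NAME, TABLE_NAME, PART_ID,
       ROUND(MEMORY_SIZE_IN_TOTAL/1024/1024/1024, 2) AS SIZE_GB,
       RECORD_COUNT
FROM M_CS_TABLES
WHERE LOAD_UNIT = 'PAGE'
ORDER BY MEMORY_SIZE_IN_TOTAL DESC;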

 




Do you have any questions which I have not covered yet? Just drop me a message or a comment.