SAP HANA Native Storage Extension (NSE) capabilities – Key Findings
This blog post provides information on what it takes to implement NSE capabilities on SAP Business Suite on HANA based on my experience utilizing it.
What to expect?
- Use Cases for implementing HANA NSE
- Brief intro on NSE
- Phases of implementing NSE
- Pre and Post implementation findings
- Table Partitioning Changes
- Key Learnings
1. Use Cases of Implementing HANA NSE
In this digital world, data has grown by leaps and bounds, and applications are built to process, consume, and store it. In the next-generation SAP S/4HANA world, an effective data management strategy plays a significant role in running the business successfully. Various data management tools, such as SAP Archiving, SAP ILM, and SAP DART, are available in the SAP Marketplace to manage data effectively at the application level.
In the world of SAP HANA, memory plays a significant role in overall system performance. It is important to manage in-memory data efficiently so that it does not grow uncontrollably. For this purpose, SAP has introduced the Native Storage Extension (NSE) capability.
The purpose of implementing SAP HANA Native Storage Extension is to reduce the in-memory data footprint of selected tables on SAP Business Suite on HANA or SAP S/4HANA. HANA NSE is useful for organizations that want to manage the cost of acquiring additional HANA memory for their organic growth. It is especially useful for meeting the memory requirements of an organization's non-production environments, as it keeps hardware costs in check.
The effort of activating NSE on selected tables is expected to be significantly lower compared to implementing data archiving or data aging as NSE will not change the visibility of the data. Therefore, NSE usage will not change the results provided in SAP GUI business transactions or Fiori applications. However, the effects of NSE on performance need to be tested carefully. HANA NSE is available beginning with SAP HANA 2.0 SPS 04.
2. Brief Intro on NSE
SAP HANA NSE is a general purpose, built-in warm data store in SAP HANA that lets you manage data accessed infrequently without fully loading it into memory. When using NSE, we mark this infrequently accessed data as page loadable; this data does not get automatically loaded into memory. This page loadable data is loaded into memory in granular units of pages as required for query processing. Hence, page loadable data resides partly in-memory and is called on-demand from database disk storage. NSE can define any table partition/column/index as page loadable.
More information about SAP HANA NSE can be found in the official SAP Guide here.
3. Phases of Implementing NSE
I divided my POC for NSE implementation on SAP Business Suite on HANA into the phases below.
A. Manage NSE Buffer Cache Capacity
The buffer cache works with the existing SAP HANA memory manager to implement a page management policy. The buffer cache provides page memory from the dynamically managed memory pools.
The buffer cache is enabled by default; you need not perform any tasks to enable it. The default size of the buffer cache is 10% of the HANA memory, which can be changed by the DB admin.
We can dynamically change the upper limit of the buffer cache size by altering the max_size and max_size_rel parameters.
We can monitor buffer cache utilization from the SAP HANA views M_BUFFER_CACHE_STATISTICS and M_BUFFER_CACHE_POOL_STATISTICS.
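As a sketch, the buffer cache limit can be adjusted and monitored with SQL along these lines (the section and parameter names follow the SAP documentation for the column-store buffer cache; the 50 GB value is an arbitrary example):

```sql
-- Example only: cap the column-store buffer cache at 50 GB (max_size is in MB)
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
    SET ('buffer_cache_cs', 'max_size') = '51200' WITH RECONFIGURE;

-- Check the configured limit and current usage
SELECT * FROM M_BUFFER_CACHE_STATISTICS;
SELECT * FROM M_BUFFER_CACHE_POOL_STATISTICS;
```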
B. Enable NSE Advisor
SAP HANA NSE Advisor can be used to receive suggestions regarding load units for tables, partitions, or columns according to how frequently they are accessed.
Turning ON NSE Advisor
- Go to Tenant database in HANA Cockpit and click on Recommendations.
- Next, click on Configure.
- Next, select the scan period from different options.
- Next, activate the status of NSE Advisor.
The NSE Advisor can be activated for different scan densities (2 hours, 1 day, 7 days, and 30 days) from HANA Cockpit or at the SQL level, and we can then get the recommendations from the M_CS_NSE_ADVISOR view.
The NSE Advisor provides a list of tables, partitions, and columns with suggestions on whether they should be page loadable or column loadable.
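At the SQL level, the recommendations can be read directly from the advisor view; a minimal example (using SELECT * to stay independent of the exact view layout):

```sql
-- Read the current NSE Advisor recommendations
SELECT * FROM M_CS_NSE_ADVISOR;
```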
Changing the Load Unit for Table and Columns
SAP HANA NSE can be activated through SQL commands on the required tables, partitions, columns, or indexes.
Changing the load unit in an ALTER command alters the load unit of the object.
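For illustration, assuming a hypothetical schema MYSCHEMA, the load unit can be changed at different granularities (the table- and partition-level syntax matches the statements used later in this post):

```sql
-- Make an entire table page loadable (warm)
ALTER TABLE MYSCHEMA.BALDAT PAGE LOADABLE;

-- Make a single partition page loadable
ALTER TABLE MYSCHEMA.SRRELROLES ALTER PARTITION 2 PAGE LOADABLE;

-- Revert a table to fully in-memory (hot)
ALTER TABLE MYSCHEMA.BALDAT COLUMN LOADABLE;
```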
C. Finalized NSE candidates for my system
Steps to be performed to finalize NSE candidates are stated below.
The following are the finalized NSE candidates for my POC on SAP Business Suite on HANA. A CRM system was used in this scenario.
The finalized candidates spanned three object types: tables, columns, and indexes (for the internal column $trexexternalkey).
- Column candidate: CLUSTD (from table BALDAT)
- Index candidate: ADRU~0 (from table ADRU)
Note: There are additional columns from the NSE Advisor recommendations that could be page loadable, but as part of the learning process, I selected only one candidate.
For more information, follow the below SAP Notes:
- 2799997 – FAQ: SAP HANA Native Storage Extension (NSE)
- 1986747 – How-To: Analyzing Internal Columns in SAP HANA Column Store
4. Pre and Post NSE Implementation Findings
This section gives an overview of the pre- and post-NSE implementation findings regarding table-, column-, and index-level changes.
Table level NSE changes (21 finalized tables): Around 53.5 GB was reduced from main memory.
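The memory reduction can be verified per table from the monitoring view M_CS_TABLES; a sketch, assuming a hypothetical schema MYSCHEMA:

```sql
-- In-memory footprint of the NSE candidate tables before/after the change
SELECT TABLE_NAME, LOADED, MEMORY_SIZE_IN_TOTAL
  FROM M_CS_TABLES
 WHERE SCHEMA_NAME = 'MYSCHEMA'
 ORDER BY MEMORY_SIZE_IN_TOTAL DESC;
```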
Buffer Cache Pre-HANA DB Restart:
Buffer Cache Post-HANA DB Restart:
Memory Utilization Pre-NSE changes:
Memory Utilization Post-NSE changes and HANA DB Restart:
Column and Index level NSE changes (two finalized candidates): Around 12 GB was reduced from main memory.
Buffer Cache Pre-HANA DB Restart:
Buffer Cache Post-HANA DB Restart:
5. Table Partitioning Changes
This section details how table partitioning columns were determined.
For table SRRELROLES, a range partition using the timestamp column UTCTIME would be the best choice, but UTCTIME is not part of the primary key. I therefore decided to use multi-level hash-range partitioning.
For hash partitioning, the most commonly accessed/queried column was selected using the exercise below.
Querying M_SQL_PLAN_CACHE for table SRRELROLES in the SQL console reveals a common pattern and shows the columns through which the table is most often accessed. Below are the query and its result.
select statement_string, execution_count, total_execution_time from m_sql_plan_cache where statement_string like '%SRRELROLES%' order by total_execution_time desc
Per the query results from M_SQL_PLAN_CACHE for table “SRRELROLES”, I analyzed the ‘where’ clause and found a common pattern. It could be best assumed that table SRRELROLES is accessed mostly via CLIENT, OBJKEY, and LOGSYS (excluding OBJTYPE which might not be of use for us here.)
To partition SRRELROLES for the best use of NSE, the solution is to implement hash-range multi-level partitioning on column OBJKEY (level 1) and column UTCTIME (level 2). We can choose the number of partitions at each level; here I went for 3x3 partitions with the command below.
ALTER TABLE SCHEMANAME.SRRELROLES PARTITION BY HASH (OBJKEY) PARTITIONS 3, RANGE (UTCTIME) (PARTITION 20181201235959 <= VALUES < 20211201235959, PARTITION 20011201235959 <= VALUES < 20181201235959, PARTITION OTHERS);
Similarly, for Table CRM_JCDS, the solution I reached is to implement hash-range multi-level partitioning for column OBJNR (level 1) and column UDATE (level 2).
For more information on HANA Partition, refer to below SAP Notes:
2418299-SAP HANA: Partitioning Best Practices / Examples for SAP Tables.
2044468 -FAQ: SAP HANA Partitioning.
For table SRRELROLES, it was decided that data older than 3 years would be offloaded.
Range partitioning with column UTCTIME alone did not work, as UTCTIME is not part of the primary key, hence I used multi-level hash-range partitioning.
Statement: ALTER TABLE SCHEMANAME.SRRELROLES PARTITION BY HASH (OBJKEY) PARTITIONS 3, RANGE (UTCTIME) (PARTITION 20181201235959 <= VALUES < 20211201235959, PARTITION 20011201235959 <= VALUES < 20181201235959, PARTITION OTHERS);
Result: successfully executed in 7:10.782 minutes (server processing time: 7:10.751 minutes) -Rows Affected: 0
Unload the Table from memory:
Make the sub-partitions containing the older data range page loadable:
ALTER TABLE SCHEMANAME.SRRELROLES ALTER PARTITION 2 PAGE LOADABLE
ALTER TABLE SCHEMANAME.SRRELROLES ALTER PARTITION 5 PAGE LOADABLE
ALTER TABLE SCHEMANAME.SRRELROLES ALTER PARTITION 8 PAGE LOADABLE
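To confirm the change took effect, the load status and residual memory per sub-partition can be checked in M_CS_TABLES; a sketch:

```sql
-- Load status and residual memory per sub-partition of SRRELROLES
SELECT TABLE_NAME, PART_ID, LOADED, MEMORY_SIZE_IN_TOTAL
  FROM M_CS_TABLES
 WHERE TABLE_NAME = 'SRRELROLES';
```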
For Table CRM_JCDS, it was decided data older than 3 years would be offloaded.
Range Partitioning with Column UDATE didn’t worked as UDATE column is not a primary key, hence I did Multi-Level Hash-Range partitioning.
Statement: ALTER TABLE SCHEMANAME.CRM_JCDS PARTITION BY HASH (OBJNR) PARTITIONS 3, RANGE (UDATE) (PARTITION 20181201 <= VALUES < 20211201, PARTITION 20011201 <= VALUES < 20181201, PARTITION OTHERS);
Result: successfully executed in 9:19.667 minutes (server processing time: 9:19.637 minutes) -Rows Affected: 0
Table sub-partitions 2, 5, and 8 were offloaded and made page loadable.
Statement: ‘ALTER TABLE SCHEMANAME.CRM_JCDS ALTER PARTITION 2 PAGE LOADABLE’
Result: successfully executed in 1:24.713 minutes (server processing time: 1:24.678 minutes) -Rows Affected: 0
Statement: ‘ALTER TABLE SCHEMANAME.CRM_JCDS ALTER PARTITION 5 PAGE LOADABLE’
Result: successfully executed in 46 ms 262 μs (server processing time: 11 ms 109 μs) - Rows Affected: 0
Statement: ‘ALTER TABLE SCHEMANAME.CRM_JCDS ALTER PARTITION 8 PAGE LOADABLE’
Result: successfully executed in 45 ms 470 μs (server processing time: 10 ms 408 μs) - Rows Affected: 0
For Table Z*: It was decided data for range zero to 0000000315695160 will be offloaded.
Single level range partitioning with column CHANNEL_UID was chosen, as CHANNEL_UID column is a primary key.
Statement: ALTER TABLE SCHEMANAME.Z* PARTITION BY RANGE (CHANNEL_UID) ((PARTITION 0 <= VALUES < 0000000315695160, PARTITION OTHERS));
Result: successfully executed in 5:36.141 minutes (server processing time: 5:36.106 minutes) -Rows Affected: 0
Table Partition: 1 was offloaded and made PAGE Loadable
Statement: ‘ALTER TABLE SCHEMANAME.Z* ALTER PARTITION 1 PAGE LOADABLE’
Result: successfully executed in 45.667 seconds (server processing time: 45.629 seconds) -Rows Affected: 0
6. Key learnings:
- No separate license is required to activate NSE in HANA DB.
- NSE should not be thought of as a replacement for data archiving solutions; in fact, NSE and archiving work in tandem.
- NSE's potential to offload data can range from one-tenth to one-third of the data or more. However, a wrong partitioning design or wrong candidate selection for NSE can lead to frequent page loads and performance issues.
- Wrong sizing of the buffer cache can also hurt performance through too many loads and unloads.
- Frequent monitoring of buffer cache and cache hit ratio is a must.
- Proper system analysis is fundamental.
- A proper partitioning concept and ongoing monitoring are essential.
I hope this post is a quick reference to enable NSE in your system. Gear up for NSE capabilities and check today to make the best use of the latest innovations offered by SAP.
Please leave a comment and your feedback on this blog as it will help me grow. Follow my profile for related content.
Additionally, follow the SAP S/4HANA topic page https://community.sap.com/topics/s4hana, post and answer questions about SAP S/4HANA https://answers.sap.com/tags/73554900100800000266, and read other posts on the topic https://blogs.sap.com/tags/73554900100800000266/.
Pranathi Varma Mudunuri
You said: "HANA NSE is deemed useful especially for meeting the memory requirements of the non-prod environments."
Isn't it an issue to have different table settings between prod and non-prod systems? I mean, if we are doing performance tests on a quality system where a table is page loadable, will it be representative of prod behavior if the same table is column loadable there?
In addition, I saw that, starting from SAP_BASIS 7.55, the ABAP Data Dictionary manages the load unit, meaning it has to be set on the development system and then transported. So the setting will be aligned across the full landscape anyway.
Regarding the question on how NSE is handled in the ABAP Data Dictionary:
Please check SAP Note 2973243 - Guidance for use of SAP HANA Native Storage Extension in SAP S/4HANA and SAP Business Suite powered by SAP HANA section "How is NSE integrated in ABAP Data Dictionary (DDIC)?"
Please also pay special attention to the option of a 'preferred' load unit (introduced with SAP_BASIS 7.56) which does NOT mean that a transported load-unit will be applied automatically on the HANA DB. (Link to Documentation)
Thanks for the information, I didn't know it was possible to decide between forced or preferred for this DDIC setting.
NSE is mainly suited to high-read table scenarios. If heavy transactions write data into HANA tables, those tables are not good candidates for NSE.
Pranathi Varma Mudunuri Thanks for your detailed blog.
Maybe SAP Note 2973243 - Guidance for use of SAP HANA Native Storage Extension in SAP S/4HANA and SAP Business Suite powered by SAP HANA can also be beneficial for your readers.