Sri Krishna Chaitanya Kuppili

NSE Implementation Experience

** All screenshots were taken by me from one of our HANA implementations.

Introduction:

NSE (Native Storage Extension) is a fully integrated warm data store in the HANA database, which we can leverage to move less frequently accessed data to disk without loading it fully into memory.

This is a very useful feature, because it lets us avoid increasing server capacity and thereby control hardware costs.

For example, one of our customers has a 2 TB production instance on Azure with an average memory consumption of 1.7–1.8 TB. To increase hardware capacity while keeping a similar CPU size, we would need to move to a larger VM of around 4 TB, which is oversized for the current growth rate of the database; the customer would end up paying for a larger VM even though it is not fully utilized.

[Screenshot: example Azure VM pricing comparison]

To keep the VM size the same and to control memory usage, we have the following options.

  1. Reduce DB size by archiving – a standard procedure available irrespective of database type; however, it must take retention policy requirements into account to meet the customer's audit needs.
  2. Data Aging – another good feature, but it has limitations: aging cannot easily be reversed, and custom code adaptation is required to access aged data.
  3. NSE – a very useful offering by SAP for HANA DB that covers the drawbacks of Aging and Archiving: NSE can be reversed easily and no custom code adaptation is required.
  4. Extension nodes – keeping costs in consideration, we have not explored this option.

So, we have picked NSE as an option to reduce the database memory utilization.

Pre-Implementation Analysis :

Before starting the NSE implementation, we need to identify the potential tables that can be moved to NSE and offloaded from memory.

To build this list of tables, we first define the selection criteria. The preferred criterion is to offload tables with a high volume of changes (INSERTs, DELETEs, UPDATEs) but very few READs. As a best practice, we also decided to exclude business-critical tables from this exercise.

We can use the following to find out the tables that satisfy the above criteria.

  1. ST10 Table Call Statistics
  2. HANA SQL HANA_Tables_TableStatistics_1.00.100+
  3. Top tables by Memory
  4. NSE Advisor

As defined by SAP, NSE Advisor is a tool that determines the temperature of data and uses rule-based algorithms (data temperature threshold, access pattern/frequency, data density) to derive recommendations, in particular to identify hot and warm objects as candidates to be either column-loadable or page-loadable.

NSE Advisor needs to run in the production system for a few days to capture table access statistics and provide recommendations. This can add overhead on the database; however, we did not notice a significant increase in CPU/memory consumption, so we activated NSE Advisor.

Also note that NSE Advisor only covers activity during the period for which it is enabled; it cannot account for tables accessed (or not accessed) while it is off. It still gives a reasonable estimate if you have covered the key time periods.

In addition to NSE advisor, use the top tables list to support and justify your findings.

Note: Please review resource consumption on your HANA database in a quality system to understand the impact before activating this in production. The key metrics to monitor are DB CPU, DB memory usage, HANA thread usage, job runtime, etc.

Activating NSE Advisor

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'system') SET ('cs_access_statistics', 'collection_enabled') = 'true' WITH RECONFIGURE;
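To confirm the parameter took effect, the active configuration can be checked from the M_INIFILE_CONTENTS monitoring view, for example as sketched below:

```sql
-- Check the active value of the NSE Advisor collection switch
-- across configuration layers (DEFAULT/SYSTEM/HOST).
SELECT FILE_NAME, LAYER_NAME, SECTION, KEY, VALUE
FROM M_INIFILE_CONTENTS
WHERE SECTION = 'cs_access_statistics'
  AND KEY = 'collection_enabled';
```

A row with LAYER_NAME = 'SYSTEM' and VALUE = 'true' indicates the advisor statistics collection is active.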

This activates NSE Advisor with the default settings. We can configure NSE Advisor to focus on tables larger than a specific size to avoid noise in the result set.

NSE Advisor Additional Configuration Parameters:

MIN_OBJECT_SIZE: controls the minimum object size (in bytes) for an object to be considered for recommendation.

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('cs_nse_advisor', 'min_object_size') = 'YYY' WITH RECONFIGURE;
-- default value 1048576 = 1 MiB

Replace YYY with a number in bytes to set the minimum object size.

 

HOT_OBJECT_THRESHOLD_PERCENT: controls the percentage of tables, according to row scan count and size, considered when calculating the hot object threshold.

COLD_OBJECT_THRESHOLD_PERCENT: controls the percentage of tables, according to row scan count and size, considered when calculating the cold object threshold.

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'system') SET ('cs_nse_advisor', 'hot_object_threshold_rel') = 'YY' WITH RECONFIGURE;
-- default value 10

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'system') SET ('cs_nse_advisor', 'cold_object_threshold_rel') = 'YY' WITH RECONFIGURE;
-- default value 10

Note: The sum of HOT_OBJECT_THRESHOLD_PERCENT and COLD_OBJECT_THRESHOLD_PERCENT must not exceed 100.

Once NSE Advisor is activated, allow it to run for a few weeks, covering most of the user activity periods. Then turn it off using the SQL statement below:

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'system') SET ('cs_access_statistics', 'collection_enabled') = 'false' WITH RECONFIGURE;

Review the recommendations of NSE Advisor using the query:

SELECT * FROM M_CS_NSE_ADVISOR;
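A more targeted review can be sketched along the following lines; note the exact column set of M_CS_NSE_ADVISOR varies by HANA revision, so the columns used here are assumptions to verify with a SELECT * first:

```sql
-- Sketch: list advisor recommendations per table/partition.
-- Column names (SCHEMA_NAME, TABLE_NAME, PART_ID, LOAD_UNIT)
-- are assumptions -- verify them on your HANA revision.
SELECT SCHEMA_NAME, TABLE_NAME, PART_ID, LOAD_UNIT
FROM M_CS_NSE_ADVISOR
ORDER BY SCHEMA_NAME, TABLE_NAME, PART_ID;
```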

NSE Advisor Results:

After reviewing the NSE Advisor results, we decided to move the following tables to PAGE LOADABLE to offload them from memory.

  1. ZARIX* archive infosystem tables
  2. CDHDR/CDPOS
  3. EDIDC/EDIDS

This gave us savings of 250 GB, with potential savings of around 500 GB.

Implementation:

As a first step, we moved ZARIX* tables to PAGE LOADABLE.

ALTER TABLE SAPABAP1.ZARIXBC1 PAGE LOADABLE CASCADE;
ALTER TABLE SAPABAP1.ZARIXBC2 PAGE LOADABLE CASCADE;
ALTER TABLE SAPABAP1.ZARIXBC3 PAGE LOADABLE CASCADE;
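To verify the load unit change, the TABLES system view can be queried; the LOAD_UNIT column is assumed to be available on recent HANA 2.0 revisions (it was introduced together with NSE), so verify on your system:

```sql
-- Sketch: confirm the ZARIX* tables are now PAGE loadable.
-- LOAD_UNIT column availability depends on HANA revision.
SELECT TABLE_NAME, LOAD_UNIT
FROM SYS.TABLES
WHERE SCHEMA_NAME = 'SAPABAP1'
  AND TABLE_NAME LIKE 'ZARIX%';
```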

After a week of monitoring the buffer cache, we moved the Change Documents tables to PAGE LOADABLE.

Before moving them to PAGE LOADABLE, we decided to take advantage of partitioning: move all old historical data to PAGE LOADABLE and keep only the current partition in memory.

So we partitioned CDHDR by UDATE and CDPOS by CHANGENR.

ALTER TABLE SAPABAP1.CDHDR PARTITION BY RANGE (UDATE)
((PARTITION '20170101' <= VALUES < '20180101',
  PARTITION '20180101' <= VALUES < '20190101',
  PARTITION '20190101' <= VALUES < '20200101',
  PARTITION '20200101' <= VALUES < '20210101',
  PARTITION '20210101' <= VALUES < '20220101',
  PARTITION OTHERS));

ALTER TABLE SAPABAP1.CDPOS PARTITION BY RANGE (CHANGENR)
((PARTITION '0000000000' <= VALUES < '0010000000',
  PARTITION '0010000000' <= VALUES < '0050000000',
  PARTITION '0050000000' <= VALUES < '0100000000',
  PARTITION '0100000000' <= VALUES < '0200000000',
  PARTITION '0200000000' <= VALUES < '0300000000',
  PARTITION '0300000000' <= VALUES < '0400000000',
  PARTITION '0400000000' <= VALUES < '0500000000',
  PARTITION '0500000000' <= VALUES < '0600000000',
  PARTITION '0600000000' <= VALUES < '0700000000',
  PARTITION '0700000000' <= VALUES < '0800000000',
  PARTITION '0800000000' <= VALUES < '0900000000',
  PARTITION '0900000000' <= VALUES < '1000000000',
  PARTITION OTHERS));

Partitioning can take time depending on the table size, and the following SQL query can be used to track the progress.

SELECT * FROM M_JOB_PROGRESS;

While partitioning, keep in mind the performance impact on the database and keep monitoring the alerts from Solution Manager, if you have it configured already.

During the partitioning process, we received the following alerts, but there was no severe impact on performance as the partitioning was done in a period of low activity. Still, it is recommended to have monitoring set up.

  1. HANA Memory Usage – Memory Usage of Services
  2. MVCC Versions
  3. Percentage of Transactions Blocked
  4. Long-Running Uncommitted Write Transactions

Once partitioned, move the historical partitions to PAGE LOADABLE.

Statement 'ALTER TABLE SAPABAP1.CDHDR ALTER PARTITION 1 PAGE LOADABLE' executed in 1:08.042 minutes (server processing time: 1:07.851 minutes)
Statement 'ALTER TABLE SAPABAP1.CDHDR ALTER PARTITION 2 PAGE LOADABLE' executed in 1:08.998 minutes (server processing time: 1:08.811 minutes)
Statement 'ALTER TABLE SAPABAP1.CDHDR ALTER PARTITION 3 PAGE LOADABLE' executed in 1:12.009 minutes (server processing time: 1:11.830 minutes)
Statement 'ALTER TABLE SAPABAP1.CDHDR ALTER PARTITION 4 PAGE LOADABLE' executed in 1:04.275 minutes (server processing time: 1:04.094 minutes)
Statement 'ALTER TABLE SAPABAP1.CDPOS ALTER PARTITION 1 PAGE LOADABLE' executed in 3.513 seconds (server processing time: 3.238 seconds)
Statement 'ALTER TABLE SAPABAP1.CDPOS ALTER PARTITION 2 PAGE LOADABLE' executed in 2:19.986 minutes (server processing time: 2:19.805 minutes)
Statement 'ALTER TABLE SAPABAP1.CDPOS ALTER PARTITION 3 PAGE LOADABLE' executed in 2:14.751 minutes (server processing time: 2:14.573 minutes)
Statement 'ALTER TABLE SAPABAP1.CDPOS ALTER PARTITION 4 PAGE LOADABLE' executed in 4:12.651 minutes (server processing time: 4:12.475 minutes)
Statement 'ALTER TABLE SAPABAP1.CDPOS ALTER PARTITION 5 PAGE LOADABLE' executed in 4:56.709 minutes (server processing time: 4:56.533 minutes)
Statement 'ALTER TABLE SAPABAP1.CDPOS ALTER PARTITION 6 PAGE LOADABLE' executed in 7:26.954 minutes (server processing time: 7:26.776 minutes)
Statement 'ALTER TABLE SAPABAP1.CDPOS ALTER PARTITION 7 PAGE LOADABLE' executed in 7:12.293 minutes (server processing time: 7:12.116 minutes)
Statement 'ALTER TABLE SAPABAP1.CDPOS ALTER PARTITION 8 PAGE LOADABLE' executed in 9:15.744 minutes (server processing time: 9:15.566 minutes)
Statement 'ALTER TABLE SAPABAP1.CDPOS ALTER PARTITION 9 PAGE LOADABLE' executed in 17:31.126 minutes (server processing time: 17:30.949 minutes)
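Afterwards, the per-partition load units (and remaining in-memory footprint) can be reviewed via M_CS_TABLES; the LOAD_UNIT column there is an assumption to verify on your HANA revision:

```sql
-- Sketch: review load unit and memory size per partition
-- for the partitioned change document tables.
-- LOAD_UNIT column availability depends on HANA revision.
SELECT TABLE_NAME, PART_ID, LOAD_UNIT, MEMORY_SIZE_IN_TOTAL
FROM M_CS_TABLES
WHERE SCHEMA_NAME = 'SAPABAP1'
  AND TABLE_NAME IN ('CDHDR', 'CDPOS')
ORDER BY TABLE_NAME, PART_ID;
```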

Buffer Cache Size:

As the buffer cache is what holds the pages when they are accessed, it needs to be sized properly.

If the buffer cache is too large, most of it is left unused.

If it is too small, buffer cache overflow events can occur and cause short dumps.

By default, the buffer cache is sized at 10% of the global allocation limit (GAL), and we can customize it using the parameters below.

 

indexserver.ini -> [buffer_cache_cs] -> max_size : Maximum Buffer Cache Size in MB

indexserver.ini -> [buffer_cache_cs] -> max_size_rel : Maximum buffer cache size relative to GAL in %
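For example, to cap the buffer cache at 10 GB (the size we eventually settled on, as described below), a statement along the following lines can be used; the 10240 MB value is illustrative, not a sizing recommendation:

```sql
-- Set the maximum column store buffer cache size to 10240 MB (10 GB).
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET
('buffer_cache_cs', 'max_size') = '10240'
WITH RECONFIGURE;
```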

 

As part of the NSE implementation activity, we offloaded 180 GB of tables and kept the initial buffer cache size at 100 MB, which is too low to start with, but we left it there for testing.

When we asked our master data team to load some master data, they observed short dumps caused by buffer cache overflow.

We increased the buffer cache size to 10 GB and have not received any further dumps.

The buffer cache needs to be monitored to review the hit ratio, buffer reuse count, etc.

It can be monitored using:

SELECT * FROM M_BUFFER_CACHE_STATISTICS;
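A more focused check might look like the sketch below; the column names (MAX_SIZE, USED_SIZE, HIT_RATIO) are assumptions based on the standard view definition, so verify them with a SELECT * on your revision first:

```sql
-- Sketch: key buffer cache health indicators per host/service.
-- Column names are assumptions -- verify on your HANA revision.
SELECT HOST, PORT, MAX_SIZE, USED_SIZE, HIT_RATIO
FROM M_BUFFER_CACHE_STATISTICS;
```

A consistently low hit ratio suggests the buffer cache is undersized for the offloaded working set.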

Results: 

After moving a few tables to PAGE LOADABLE, we saw HANA memory usage drop by around 240 GB, which is a significant amount considering the database size.

 

Table Partition Sizes and Record Counts post partitioning:

 

Important SAP Notes 

3123259 – SAP HANA Native Storage Extension 2.0 SPS 06 Functional Restrictions

2927591 – SAP HANA Native Storage Extension 2.0 SPS 05 Functional Restrictions

2799997 – FAQ: SAP HANA Native Storage Extension (NSE)

 

Comments

      Karthikeyan SalaiVijayan:

      Sri Krishna Chaitanya Kuppili Nice blog with real-time experience details. I would like to know how NSE would fare during INSERT/UPDATE/UPSERT operations, which happens to delta store and in which case, whether the entire table will be loaded to memory or desired partition/page. Loading full table to memory during these operations violates the benefit of NSE.

      Sri Krishna Chaitanya Kuppili (Blog Post Author):

      Hi Karthikeyan SalaiVijayan, yes, there is a tradeoff between performance and memory usage. We need to choose the tables, buffer cache size and partitioning strategy in such a way that any performance degradation stays within a range acceptable to the customer, while still delivering the memory utilization benefits.

      We noticed a 10% increase in elapsed time per execution for INSERTs on CDHDR, which the customer accepted given the benefits.

      Claudia Semmler:

      Sri Krishna Chaitanya Kuppili : thanks for sharing your experience! Based on the tables you covered (e.g. CDHDR/CDPOS) I assume you were working on an ECC or SAP S/4HANA system, right?

      I especially liked the details on the runtime it took to set the partitions to 'page loadable'. It may be helpful if you could add the size of partitions in GB or the number of records per partition so that readers can roughly see the relation between the volume of data and the time it takes to change the load unit.

      Maybe the following SAP Notes may also be useful for your readers:

      2973243 - Guidance for use of SAP HANA Native Storage Extension in SAP S/4HANA and SAP Business Suite powered by SAP HANA

      2869647 - Guidance for use of Data Aging in SAP S/4HANA

       

      Sri Krishna Chaitanya Kuppili (Blog Post Author):

      Thank you Claudia Semmler. I have updated the required details in the blog.