
NSE(Native Storage Extension) Data Tiering Options

NSE Whitepaper

What is NSE?

SAP introduced NSE with HANA 2.0 SPS 04. NSE is used to store warm data. HANA traditionally stores hot data in memory, but as data growth accelerated in some organizations, the need for another store arose, and SAP introduced a warm store, called the Native Storage Extension (NSE).

Please refer to the block diagram below by SAP:

 

Customers implementing the SAP NSE solution:

Customers who are considering implementing NSE should closely monitor their data growth. They need to compare the total memory size with the used size.

In our system we found a database growth of 61.91% per year, so we moved towards implementing NSE.
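As a simple starting point, total versus used memory can be compared with the M_HOST_RESOURCE_UTILIZATION monitoring view; a minimal sketch (output rounded to GB, exact column set depends on the HANA revision):

-- Compare the instance allocation limit with the memory actually in use
SELECT HOST,
       ROUND(ALLOCATION_LIMIT / 1024 / 1024 / 1024, 2) AS ALLOC_LIMIT_GB,
       ROUND(INSTANCE_TOTAL_MEMORY_USED_SIZE / 1024 / 1024 / 1024, 2) AS USED_GB
  FROM M_HOST_RESOURCE_UTILIZATION;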

In-Memory/Hybrid/On-Disk – NSE stores data in a disk-based column store, while HANA stores data in the in-memory column store, so the result is a hybrid column store approach.

NSE integration is based on the HANA persistence layer, in close connection with the Page Access and Resource Manager components.

The Buffer Cache (BC) is required for performant access to pages on disk. The buffer cache avoids redundant I/O operations by keeping pages that are accessed frequently in memory rather than reading them from disk repeatedly. The buffer cache uses LRU (Least Recently Used) and HBL (Hot Buffer List) strategies and reuses pages from its internal pools instead of allocating/deallocating pages via HANA memory management.
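Whether the buffer cache is sized well can be checked with the M_BUFFER_CACHE_STATISTICS monitoring view; a sketch (the exact column set depends on the HANA revision):

-- Check sizing and hit ratio of the column store buffer cache ('CS')
SELECT HOST, PORT, CACHE_NAME, MAX_SIZE, ALLOCATED_SIZE, USED_SIZE, HIT_RATIO
  FROM M_BUFFER_CACHE_STATISTICS;

A low HIT_RATIO over a representative workload indicates that the buffer cache is too small for the warm data being accessed.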

                                       
NSE Advisor:
With the help of the NSE Advisor, the objects (tables, partitions, or columns) that are suitable to be converted to page loadable (to save memory) or to column loadable (to improve performance) can be identified within the recommendations result view.
NSE Functional Restrictions:
Consider the following when storing large data sets in NSE on servers with limited memory capacity:
  • HANA, as an in-memory database, executes queries by allocating transient data and intermediate results in memory. Queries do not page intermediate results, or parts of them, from memory to disk.
  • HANA keeps NSE data in memory in a buffer cache. Low hit rates in the buffer cache can cause insufficient query performance due to a high number of disk reads.
  • SAP provides general guidelines about buffer cache sizing in the SAP HANA Administration Guide for SAP HANA Platform. Deviations from the guidelines require application-based sizing or proofs of concept with workload simulations.
  • Users can store warm data in NSE instead of on Dynamic Tiering. In contrast to Dynamic Tiering, the query execution in the HANA service storing the NSE data creates transient data and intermediate results in memory only. Thus, the memory requirement for a comparable workload can be higher with NSE. A solution to migrate data from Dynamic Tiering to NSE is on the road map for SAP HANA.
  • SAP HANA NSE is for scale-up systems. For scale-out systems, SAP HANA does not check if users create tables with page-loadable columns or convert tables to page-loadable.
  • SAP HANA NSE supports partition load units for heterogeneous partitions but does not support partition load units for non-heterogeneous partitions.
  • Specifying a partition-level load unit is supported for the following partitioning schemes:
    – Unbalanced range
    – Unbalanced range-range
    For all other partitioning schemes used in SAP NSE tables, load units can be specified only at column, table, and index level.
NSE Advisor Usage:
1. Identify a representative workload for your system
2. Optionally configure the NSE Advisor
3. Enable the NSE Advisor
4. Run the representative workload
    a. Monitor the performance of statements
    b. Monitor the memory usage
    c. Monitor the duration of processes
5. Disable the NSE Advisor
6. Evaluate and save the recommendations of the NSE Advisor
7. Migrate the objects selected from the recommendations
8. Run the representative workload
    a. Monitor performance of statements
    b. Monitor the memory usage
    c. Monitor the duration of processes
9. Iterate your tests (e.g. restart from step 2) until you have a proven setup that fits your requirements.
Configure NSE Advisor:
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('cs_nse_advisor', 'min_object_size') = 'XXX' WITH RECONFIGURE; -- default value 1048576 = 1 MiB
Enable NSE Advisor:
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('cs_access_statistics', 'collection_enabled') = 'true' WITH RECONFIGURE;
Disable NSE Advisor:
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('cs_access_statistics', 'collection_enabled') = 'false' WITH RECONFIGURE;
Evaluate and save the NSE advisor run.
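The recommendations themselves can be read from the M_CS_NSE_ADVISOR monitoring view while (or after) access statistics collection is enabled; a minimal sketch:

-- List the current NSE Advisor recommendations (objects suggested for PAGE or COLUMN load unit)
SELECT * FROM M_CS_NSE_ADVISOR;

Save the result set (e.g. export it from SAP HANA Cockpit or copy it into a user table), since the recommendations are based on transient access statistics.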

HANA Data Tiering Options are as follows:

  • Hot Store

– Persistent Memory

  • Warm Store

– Native Storage Extension, Extension Node, Dynamic Tiering

  • Cold Store

– Spark Controller

NSE Value Proposition and Use Cases:

  • Value proposition:
  • Increase HANA data capacity at low TCO
  • Deeply integrated warm data tier with full HANA functionality
  • Will support all HANA data types and data models
  • Simple system landscape
  • Scalable with good performance
  • Supported for both HANA on-premise and HANA-as-a-Service (HaaS)
  • Available for any HANA application
  • Complements, without replacing, other warm data tiering solutions (extension nodes, dynamic tiering)
  • Use cases:
  • Any customer-built or SAP-built HANA application that is challenged by growing data volumes
  • S/4HANA data aging (NSE is an evolution of “paged attributes”)
  • BW teams currently use extension nodes, but SAP communicated at TechEd 2019 that NSE is certified for BW/4HANA

Specifying data as “page loadable”

  • Data may be specified as “page loadable” at table level, partition level, and column level
  • Data may be converted between “page loadable” and “column loadable” (see the SQL sketch after this list)
  • NSE supports range, range-range, and hash-partitioned tables
  • For hash partitioning, the entire table or column must be page loadable or column loadable
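For illustration, the load unit is changed with ALTER TABLE; a sketch using a hypothetical table MYSCHEMA.SALES (the column-level statement repeats the full column definition, and partition-level syntax varies by partitioning scheme and revision):

-- Convert a whole table to page loadable; CASCADE also converts dependent objects
ALTER TABLE MYSCHEMA.SALES PAGE LOADABLE CASCADE;

-- Convert a single column back to column loadable (hypothetical column SALES_TEXT)
ALTER TABLE MYSCHEMA.SALES ALTER (SALES_TEXT NVARCHAR(200) COLUMN LOADABLE);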

NSE Technical Overview:

  • The HANA column store and row store each have a buffer cache.
  • Column loadable data is fully loaded into memory from disk.
  • Page loadable data is loaded from disk into the buffer cache, page by page as needed.
  • Converting column/row loadable data to page loadable format moves the data into the buffer cache.
  • When the buffer cache is full, it evicts pages intelligently based on user access patterns.
  • Warm and hot data are written together from the main store to disk during normal savepoint operations. The write-optimized (delta) store is not paged.

Tooling:

HANA Cockpit:

  • Configure buffer cache size (on-premise only; HaaS will configure this for the user) – see the SQL sketch after this list
  • Configure tables, columns, and partitions as “page loadable”
  • Monitor buffer cache usage and capacity
  • Report on resident memory status for page loadable data
  • Includes a rule-based “recommendation engine” to monitor user data access patterns
  • Based on these statistics, the engine advises the user on which tables, columns, or partitions would benefit from being converted to “page loadable”
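Outside the Cockpit, the buffer cache size can also be configured with SQL parameters in the buffer_cache_cs section of indexserver.ini (see also SAP note 3013750 mentioned in the comments); a sketch with an assumed absolute limit:

-- Cap the column store buffer cache at ~100 GB (max_size is specified in MB);
-- alternatively, max_size_rel sets the cap as a percentage of the memory allocation limit
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('buffer_cache_cs', 'max_size') = '102400' WITH RECONFIGURE;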

Data Lifecycle Manager (DLM):

  • DLM tool will allow user to convert tables, columns, and table partitions between “column loadable” and “page loadable”

Web IDE:

  • Visualized query plan will display when warm data is accessed from NSE in order to satisfy the query

On-premise sizing

  • The HANA system must be scale-up (first release)
  • Determine the volume of warm data to add to the HANA database
  • You may add as much warm storage as desired – up to a 1:4 ratio of HANA hot data in memory to warm data on disk
  • The NSE disk store should be no larger than 10 TB for the first release of NSE
  • Divide the volume of warm data by 8 – this is the size of the memory buffer cache required to manage the warm data on disk (see the worked example after this list)
  • Either add more HANA memory for the buffer cache, or use some of the existing HANA memory for the buffer cache (this will reduce the hot data volume)
  • The work area should be the same size as the hot data in memory (equivalent to HANA with no NSE)
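A purely illustrative worked example of these rules: with 1 TB of hot data in memory, up to 4 TB of warm data may be placed on disk (1:4 ratio, well under the 10 TB cap); 4 TB / 8 = 512 GB of buffer cache; the work area remains 1 TB, as for a system without NSE. The total memory requirement is therefore roughly 1 TB (hot data) + 1 TB (work area) + 512 GB (buffer cache).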

SAP HANA Extension Node – What's New in SPS 04

Common characteristics:

  • A HANA node in the scale-out landscape is reserved for warm data storage and processing
  • Supports all HANA operations and data management features
  • Allows a larger data footprint of up to 200% of the node's DRAM size
  • HANA persistent memory is supported

New Features:

  • Benefits from new partitioning and scale-out features in SPS04:

– range-hash partitioning scheme

– “pinning” tables on fixed HANA nodes

– partition grouping

(SAP diagrams: Warm Store Options – Getting Started; Cold Store; Which Data Tier Should I Use?)

I have tried to explain the best way of handling NSE and to showcase how to select NSE for each customer. In my next article I will try to articulate all the technical changes required for NSE and also introduce the data aging concepts, which are key to NSE.

Summary:

With NSE, SAP HANA offers another warm data tiering option, which is completely integrated and can be used seamlessly out of the box since SPS 04.

While the existing column store persistence building blocks were adapted to handle the new advanced paged attribute behavior (compared to the already existing paged attributes), every other component in HANA is able to work as designed.

As NSE serves warm data, high query performance KPIs will logically not be reached, since loading data from disk to memory takes time.

Data is held in the so-called buffer cache, with an initial value of 10% of the HANA memory, and the NSE Advisor helps to find objects which should be converted to page loadable.

Comments:

Akshat Malhotra

Hi Vijay,

Thanks for sharing the blog. I had a question on data tiering options in the SCP Neo platform.

Are there possibilities for NSE, Extension Nodes, and Dynamic Tiering in SCP Neo for HANA? If yes, can you provide some links? It would be very helpful. If not, are there options for scale-out scenarios in the Neo platform?

      Regards,

      Akshat Malhotra

David Merchan

      Hi Akshat,

       

NSE is only available on SAP HANA Cloud; it is not available on SCP Neo.

       

Sebastian Gesiarz

      Hello David,

       

Could you please point me to the documentation on how to use NSE in HANA Cloud (not 2.0)?

We recently learned that the Data Warehousing Foundation add-on is not supported together with its Data Lifecycle Management component.

We are looking for an alternative way to automate data cooling towards a cold store on Azure.

       

      Thanks,

      Sebastian

Madhur Chichani

       

      Hi

       

Thanks for sharing the information.

Here I have two questions:

1. Do we need to create a separate mount point to maintain cold data on the existing host and have it replicated to the other host, as my systems are running on an HA/DR configuration model (non-distributed system)?

OR

2. For non-distributed systems, is it okay to keep the data under the same mount /hana/data?

      Kindly advise.

Jens Gleichmann

      Hi,

       

with NSE you use exactly the same persistence as without it. This means the data stays as before under /hana/data/*. HSR and backup & recovery (B&R) include all warm NSE data. There is nothing you have to change.

       

      Regards,

      Jens

Madhur Chichani

Thanks, Jens. I understood the concepts, but how can we monitor and calculate what percentage or amount of memory has been released from main memory once we move data to NSE?

I know monitoring tools are available, but I'm more interested in how much it actually benefited us in terms of usage.

       

      thanks again

      Madhur

Jens Gleichmann

      Hi Madhur,

it always depends. If you activate NSE for more than one table, the buffer can be overcommitted, like resources in a virtual environment; in this context, via LRU in the buffer cache. For instance, you have a system with 1 TB memory (500 GB data). The buffer cache is 10%, which means 100 GB. You can move 2 TB of data into NSE (ratio 1:4 of hot data in memory to warm data on disk). If you move 200 GB of data from hot to warm, you will save about 100 GB of memory in the end, depending on the hit ratio / sizing of your buffer cache. You can monitor this with the HANA Cockpit or the monitoring views (M_CS_TABLES / M_CS_ALL_COLUMNS – columns *page_loadable*).

       

      Regards,

      Jens

Madhur Chichani

Thank you, Jens, for the explanation.

Christoph Streubert

Hi, thank you very much for your NSE write-up. Can you refer to any specific performance impact numbers you have seen when following the NSE Advisor?

Jens Gleichmann

      Hi Christoph,

the NSE Advisor's impact can be significant. We have seen a CPU overhead of 10-30%. You should run it for at least 7 days to get meaningful data. You can also use capture & replay to avoid the massive load on your production system. But to be honest, you will not get enough data in 7 days to cover all scenarios, e.g. quarterly / yearly closing. My favourite variant is not the advisor: I go for the manual variant and analyse the top tables and SQLs to know how the data is selected.

      If you need more details just drop me a mail or PM.

       

      Regards,

      Jens

Vivek Battu

      Hello,

Can we keep the warm data from NSE in a persistent memory mount?

       

      Thanks,

      Vivek Battu

Jens Gleichmann

      Hello,

       

there is no known limitation that prevents using pmem with NSE. But keep in mind that only the main part will be stored in pmem; the delta part stays in DRAM.

       

      Regards,

      Jens

       

Clement Mugner

      Hi Jens,

my understanding is that the NSE buffer cache is in DRAM. So by activating NSE, you actually move the page-loadable part of the main store from PMEM to DRAM.

Of course, the actually loaded part of the main store is not as large as the column-loadable part that was in PMEM before activation. So overall, memory usage decreases, but I would expect an increase in DRAM usage.

      Clément

Jens Gleichmann

      Hi Clement,

       

you are completely right, the buffer cache is a heap allocator (Pool/CS/BufferPage) located inside DRAM which cannot be placed into pmem. Only the main parts of tables can be placed into pmem.

My answer was about whether NSE and pmem can be used together. If you only page out columns or partitions, the rest, which is normally placed in the hot store (main part), can be used with pmem. The warm data placed inside the buffer cache always stays inside DRAM.

       

      I also mentioned it in my last blog for NSE Q&A.

       

      Regards,

      Jens

Jens Becher

      Hello,

I just read the blog. Very interesting, and especially so because we have a running project for using NSE.

My question(s) to the pros: we have used data aging and the parameter for the page memory pool, page_loadable_columns_limit. Does this parameter still have a meaning, i.e. does it still influence paging behaviour? I interpret the documentation such that after setting the buffer cache with max_size_rel, the parameter page_loadable_columns_limit is useless.

Is this correct?

       

Additionally, when looking into the view m_memory_object_dispositions, the values for page_loadable_columns_object_count/_size seem to belong to the aggregator Persistency/BufferCache*, whereas in Release 1 they had values for the aggregator (type) Persistency/Pages/Default. Could someone explain this?

      Thanks in advance and best regards,

       

      Jens

Jens Gleichmann

      Hi Jens,

yes, you are right. If you set max_size or max_size_rel, the parameter page_loadable_columns_limit will be ignored. See SAP note 3013750.

       

Regarding your second question, I would like to understand which values you want to monitor. In the end, SAP adds and removes columns of monitoring views with every SPS / revision.

       

      Regards,

      Jens

Hervian Hervian

Hi, is it possible to use NSE technology for row store tables?