
SAP HANA Native Storage Extension: A Native Warm Data Tiering Solution

With the exponential growth of data showing no signs of slowing, customers looking to lower their total cost of ownership (TCO), while still leveraging their vast reservoirs of data to maximum effect, must continuously manage their data storage costs without forfeiting the high performance they’ve come to expect from SAP HANA. A few data tiering options with different price/performance characteristics are already available, but soon SAP customers will have another option in their data tiering arsenal: a native solution for warm data tiering—the SAP HANA Native Storage Extension.

A Quick Data Tiering Refresher

Mission-critical hot data is retained in-memory in the SAP HANA database for real-time processing and analysis. Less frequently used warm data is stored in a lower-cost tier, but is still managed as a unified part of the SAP HANA database. Rarely used, voluminous cold data is located on the lowest-cost storage. Regardless of location, all data remains accessible at any time.

The new SAP HANA Native Storage Extension (NSE) adds a native warm data tier to the SAP HANA database. Any customer-built or SAP-built HANA application that is challenged by ballooning data volumes can leverage this deeply integrated warm data tier. NSE increases SAP HANA data capacity at a low TCO through a simple, scalable landscape that offers great performance. Supporting full SAP HANA functionality and all SAP HANA data types and data models, NSE complements—without replacing—the Extension Node and Dynamic Tiering warm data options. NSE is supported for both on-premise SAP HANA systems, and SAP Cloud Platform, SAP HANA Service.

How Does SAP HANA Native Storage Extension Work?

Hot data is ‘column loadable’: it resides completely in-memory and is loaded from disk into SAP HANA memory in columns for fast processing. The SAP HANA Native Storage Extension allows a user to designate certain warm data as ‘page loadable’ instead, so that it is loaded into memory page by page as required for query processing. Unlike column loadable data, page loadable data does not need to reside completely in-memory.

NSE reduces the memory footprint of the SAP HANA database with expanded disk capacity and an intelligent buffer cache that transfers pages of data between memory and disk. Query performance differences may be noticeable between warm data and hot data.
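For example, the load unit currently assigned to a table can be inspected through the SYS.TABLES system view (the schema name below is hypothetical; the LOAD_UNIT column reflects the SPS 04 catalog documentation, so verify it for your release):

```sql
-- Inspect load units: COLUMN = hot in-memory, PAGE = warm via NSE, DEFAULT = not set.
-- The schema name is illustrative.
SELECT TABLE_NAME, LOAD_UNIT
  FROM SYS.TABLES
 WHERE SCHEMA_NAME = 'SALES';
```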

NSE Warm Data Tiering

Additional Database Capacity with SAP HANA Native Storage Extension

SAP HANA scale-up systems are supported with this upcoming initial NSE release; scale-out support will follow in a later release. With NSE, you will be able to expand SAP HANA database capacity with warm data on disk up to about 4x the size of hot data in memory. A relatively small amount of SAP HANA memory is needed for the NSE buffer cache used in paging operations, as the buffer cache can handle 8x its size of warm data on disk. As an example, a 2TB SAP HANA system without NSE equates to a 1TB database in memory. With NSE and the addition of a 500GB buffer cache, you can expand your 1TB database to a 5TB database: 1TB of hot data, 4TB of warm data, and a 500GB buffer cache to page data between memory and disk.
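The arithmetic behind that example can be sketched as a back-of-the-envelope helper (not an official sizing tool; the 8x ratio comes from the paragraph above, and the function name is mine):

```python
# Back-of-the-envelope NSE capacity math, mirroring the example above.
# Rule of thumb from the text: the buffer cache serves ~8x its size in warm data.

def nse_capacity_gb(hot_gb: float, buffer_cache_gb: float, ratio: int = 8) -> dict:
    """Estimate total database capacity with NSE enabled (all sizes in GB)."""
    warm_gb = buffer_cache_gb * ratio
    return {
        "hot": hot_gb,
        "warm": warm_gb,
        "buffer_cache": buffer_cache_gb,
        "total_data": hot_gb + warm_gb,
    }

# 2 TB system -> 1 TB hot data; add a 500 GB buffer cache for NSE.
cap = nse_capacity_gb(hot_gb=1024, buffer_cache_gb=500)
print(cap["warm"])        # 4000 GB (~4 TB of warm data)
print(cap["total_data"])  # 5024 GB (~5 TB database: 1 TB hot + 4 TB warm)
```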

Next Steps

We are excited to roll out this native SAP HANA warm data tiering solution that boasts full SAP HANA functionality at a lower TCO for customers. Want to learn more about SAP HANA Native Storage Extension? Find out more here or drop me a question in the comments section below.

30 Comments
  • Hi Kumar,

    NSE is an extension and enhancement of the Paged Attributes capability which S/4 already uses for data aging. In that context S/4 is using a version of NSE today, with room to take more advantage of the enhanced capabilities in the future.

    BW is currently focused on the use of Extension Nodes for warm data tiering. BW may evaluate NSE for future use.

    Thanks
    Rob

  • Hi Robert,
    In the NSE solution, should we consider the persistency layer (disk, data volume) of HANA as the warm tier, or do we need to deploy extended storage (disk), e.g. SAP IQ tightly integrated with the HANA DB, as the warm tier?
    Is it also possible to adopt persistent memory (PMEM) for the hot tier and NSE for the warm tier at the same time?
    Thanks,
    Arash

  • Hi Arash,

    NSE is a warm data tiering option within the HANA system. Like DT, NSE pages or moves data back and forth between disk and a cache as needed to process queries. A big difference between NSE and DT is that NSE is a feature of the HANA index server and runs as part of that process. Data that is assigned to be "page loadable", meaning that it is being managed by NSE, is written to the same files on disk as in-memory "column loadable" HANA data. For in-memory column loadable HANA data, HANA only needs to read the data from disk immediately after start up. In contrast, NSE page loadable data can be read into the buffer cache as needed, then flushed or "ejected" from the cache either when no longer needed or when HANA needs to free up space in the buffer cache for other pages. If the same page is required again, it will be read from disk again.

    Since NSE shares the same disk storage files or "disk persistency" as in-memory HANA data, there is no need to also configure DT. Our current recommendation is that new data tiering projects should evaluate NSE as the first warm data tiering option. Key limitations for NSE in the HANA 2.0 SPS 04 release are that it supports scale-up (single node) systems only and supports up to 10TB of warm data. If you require warm data tiering for scale-out (2 or more nodes) systems and/or data volumes in the range of 11TB to 100TB, then DT should be considered.

    Yes you can use PMEM storage for in-memory data and still have other tables, partitions, or columns assigned to page loadable storage using NSE.
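For illustration, the two techniques are set independently per object, so they can coexist in one system; a minimal sketch with hypothetical table names (syntax per the SAP HANA SQL reference; verify for your release):

```sql
-- Hot table placed on persistent memory (PMEM); table name illustrative.
ALTER TABLE "SALES"."ORDERS_CURRENT" PERSISTENT MEMORY ON;

-- Older data assigned to NSE warm storage in the same system.
ALTER TABLE "SALES"."ORDERS_ARCHIVE" PAGE LOADABLE CASCADE;
```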

  • Hi Robert,
    Thanks for the detailed information.
    I have a few more questions: Is NSE compatible with BW on HANA environments? Also, could you point me to NSE configuration and administration guides?

    Thanks and Regards,
    Kiran

  • Hi, Robert!
    We would like to actively use NSE after our planned upgrade project to SPS 04. In the documentation I cannot find any specific steps that need to be executed before using NSE. Is this true? Isn't there some kind of configuration necessary before using it?

    Thanks in advance,
    Markus

  • Hi PRJSYN,

    It is up to individual SAP application teams to decide which HANA features to use. In the case of BW, the BW team is considering adding support for NSE in 2020.

    For NSE documentation I would recommend starting from the SPS 04 New Features section of the documentation which then links to more detailed NSE documentation: https://help.sap.com/viewer/42668af650f84f9384a3337bcd373692/2.0.04/en-US/c71469e026c94cb59003b20ef3e93f03.html

    We will also be delivering a hands-on session for NSE at TechEd this year. The session title is DAT370 - Operating an SAP HANA System with SAP Native Storage Extension

    Thanks
    Rob

  • Hi Markus,

    Data can be allocated to NSE storage at a table, column, or partition level. For existing objects this is done using an ALTER TABLE command.
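A hedged sketch of what those statements look like (object names are hypothetical; consult the SAP HANA SQL reference for your release):

```sql
-- Move a whole table to NSE warm storage:
ALTER TABLE "SALES"."ORDERS_2017" PAGE LOADABLE CASCADE;

-- Move a single partition:
ALTER TABLE "SALES"."ORDERS" ALTER PARTITION 2 PAGE LOADABLE;

-- Move a single column:
ALTER TABLE "SALES"."ORDERS" ALTER ("NOTES" NVARCHAR(2000) PAGE LOADABLE);

-- Bring a table back to hot, in-memory storage:
ALTER TABLE "SALES"."ORDERS_2017" COLUMN LOADABLE CASCADE;
```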

    There is a recording available for the "What's New in HANA 2.0 SPS04: HANA Data Tiering Options" presentation here: https://event.on24.com/eventRegistration/EventLobbyServlet?target=reg20.jsp&partnerref=hanablog&eventid=1945137&sessionid=1&key=AE112CD83A14398AFBBAE01766DD6EF8&regTag=465648&sourcepage=register

    You can also work through the NSE documentation section in the SAP HANA Platform documentation: https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.04/en-US/4efaa94f8057425c8c7021da6fc2ddf5.html

    This includes a section specifically covering "Getting Started With the Native Storage Extension": https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.04/en-US/7ab2658cce56438c92f1cc7b13c50597.html

    Thanks
    Rob

  • Hi Robert,

    I have read through your blog and also a few more blogs about NSE. I feel NSE is a good option among the various data aging solutions, as it is easier to manage without any additional nodes. I have some specific queries; it would be great if you could help with them:
    1. At my workplace we have SAP BW on HANA 7.5 SP5 with HANA 1.0 SP12. I am looking at NSE to unload some data to warm storage instead of keeping all the data in memory. In this regard, when is NSE likely to be made available for SAP BW on HANA? What will the supported versions be? What is SAP's plan for NSE for SAP BW on HANA?
    2. If NSE is made available for SAP BW on HANA, will it be possible to enable NSE only for new providers like ADSOs, or will there also be an option to move some data from older providers like HANA-optimized DSOs (standard, write-optimized, direct update), cubes, etc. to the NSE layer?
    Looking forward to hearing from you.

    Thanks and Regards,
    Adi

  • Hi Adi,

    The BW team is targeting support for NSE in Q1/2020. I'm not sure what BW version that will be. You may want to submit a question through the SAP Community Network (https://answers.sap.com/questions/ask.html) and tag it "BW SAP HANA Data Warehousing". That would also be a good forum to ask about what BW objects will be supported for data tiering.

    Thanks
    Rob

    • Hi Udo,

       

      XSA is an application server layer that can run as part of a HANA system. Applications built to run on XSA still connect to a HANA database and access HANA tables. The use of NSE is configured at the table level as part of the CREATE TABLE or ALTER TABLE statements and is transparent to an application that is using the table. Note that the use of NSE - or assignment of a PAGE LOADABLE load unit - can also be controlled when using HDI for the physical data modelling. With HDI, the PAGE LOADABLE load unit assignment is included in the hdbtable table definition.

       

      The end result is that there is no XSA specific syntax required to use a table that is either fully or partially assigned to be PAGE LOADABLE (using NSE) whether that table is being accessed through SQL statements or through calculation views.
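For instance, a design-time .hdbtable artifact might carry the load unit like this (a sketch with hypothetical object names; check the HDI documentation for the exact form in your release):

```sql
-- Illustrative .hdbtable definition with an NSE (page loadable) load unit.
COLUMN TABLE ORDERS_ARCHIVE (
  ID    INTEGER PRIMARY KEY,
  NOTES NVARCHAR(2000)
) PAGE LOADABLE
```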

       

      Thanks

      Rob

    • Hi Sandeep,

       

      NSE is a feature of the core HANA indexserver process and is included and enabled in all HANA 2.0 SPS 04 installations. No additional license is required to use NSE.

       

      Thanks

      Rob

      • Hello Robert,

         

        does it mean, that NSE is also available for customers of HANA with runtime license for BW?

        Thank you in advance for an information, I cannot find it anywhere.

         

        Best regards:

        Lukasz

        • Hi Lukasz,

           

          Yes, as I said NSE is a feature of the core HANA indexserver and is part of all HANA 2.0 SPS 04 systems. That includes runtime systems. While there is no separate license required for NSE, the buffer cache used to manage page loadable data with NSE is part of licensed HANA memory capacity.

           

          BW/4HANA added support for NSE with their SP 04 release in March.

          Thanks

          Rob

  • Hi Rob,

    In your example, enabling NSE means we need a 500GB buffer cache. I assume this will be carved out of the 2TB, and hence usable capacity will be 1.5TB, which means database size (hot) will be 750GB? Is this a correct understanding while we look at sizing?

    And, even though I get the idea of data temperatures, it looks like we are going back to the old school of buffering data from disk, and hence HANA becomes like a hybrid database with in-memory and traditional database styles. I assume the buffer cache will use LRU mechanisms to page out, and there will be an impact on the performance of queries accessing data stored in NSE. Is it just a price/performance call?

    • Hi Sivaramakrishnan,

       

      In the example given the total HANA memory capacity is being increased from 2 TB to 2.5 TB with the addition of 500 GB of HANA memory capacity for use by the buffer cache. If you were to carve the buffer cache out of the existing 2 TB of HANA memory capacity then following the default sizing guidance you would allocate a maximum of 400 GB of memory to the buffer cache and your maximum data volume would be 800 GB of hot in-memory data + 3.2 TB of warm page loadable data for a total database capacity of 4 TB.
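A quick sketch of that carve-out arithmetic (the 20% cache ceiling, the 50:50 data/working-memory split, and the 1:8 cache-to-warm-data guidance are taken from this discussion; all sizes in GB):

```python
# Carve the NSE buffer cache out of existing HANA memory (no extra RAM added).
# Guidance figures from the discussion: at most 20% of memory for the buffer
# cache, a classic 50:50 split between data and working memory, and a 1:8
# buffer-cache-to-warm-data ratio.

total_mem_gb = 2000                      # 2 TB HANA memory capacity
cache_gb = int(total_mem_gb * 0.20)      # max buffer cache -> 400 GB
remaining_gb = total_mem_gb - cache_gb   # left for data + working memory -> 1600 GB
hot_gb = remaining_gb // 2               # 50:50 split -> 800 GB hot data
warm_gb = cache_gb * 8                   # 1:8 guidance -> 3200 GB warm data
total_db_gb = hot_gb + warm_gb           # -> 4000 GB, i.e. ~4 TB total capacity

print(cache_gb, hot_gb, warm_gb, total_db_gb)  # 400 800 3200 4000
```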

       

      Yes NSE implements database paging for HANA and yes the basic caching algorithm is an LRU algorithm. Since page loadable data using NSE is not always in memory, then operations on that warm data are expected to take longer than operations on pure in-memory data.

       

      Generally speaking, customers who are implementing NSE are looking to manage TCO by reducing their HANA memory footprint while scaling overall HANA data capacity and maintaining reasonable performance for older less frequently accessed data.

  • Hi Robert,

    The above scenario makes sense to me. If I were to carve the buffer cache out of the existing memory:

    For example, in a 2TB system, working memory is 50% and hot data is 50%.

    With NSE, let's say the buffer cache is 10%, i.e. 200GB. Will the working memory still be at 50% and hot data at 40%?

    What is the max buffer cache one can reserve or carve out of the memory?

    Does that reduce the hot data in memory?

    What should the ratio of working memory to hot data in memory be?

    • Hi Praveen,

       

      My apologies for the slow response to your questions.

       

      HANA does not enforce a hard split between memory used for in-memory column loadable data and memory used for working memory. When memory is allocated to the buffer cache it is exclusively available to the buffer cache so it takes away from the pool of memory available to be used for in-memory column loadable data and working memory. Note that the buffer cache is only allocated as required, up to the configured limit, but once allocated it remains part of the buffer cache until the system is restarted.

       

      The traditional sizing ratio was to plan on using up to 50% of your physical memory for data and leave 50% of the physical memory for working memory. Current TDI specs recognize that different workloads can vary in their requirements for working memory relative to data volume, so that 50:50 ratio can vary to fit your use case.

       

      If your workload requires you to maintain the same amount of working memory, and if you don't have the option of adding memory to the system (which would be typical for an appliance system), then yes you would want to limit your in-memory data volume to ensure the necessary amount of working memory is available.

       

      Based on the default sizing ratios for NSE, you would allocate up to 20% of your HANA memory as buffer cache. Note that the allocation of memory to the buffer cache is relative to the amount of HANA memory, but the recommended or required amount of buffer cache is relative to the warm data volume. The default guidance is a 1:8 ratio of buffer cache to warm data volume. For a traditional disk-based database, a typical cache allocation would be in the range of 10-20% of the data volume, so that 1:8 starting ratio would put you at 12.5%.

      Those aren't physical limits on how much buffer cache can be allocated and there may be use cases where it makes sense to go higher. However, if you are finding that you need to continue increasing the buffer cache size, then you should re-evaluate whether you should be bringing specific pieces of data back into in-memory column loadable storage. The NSE Advisor can help you with that evaluation.
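The 1:8 guidance can be turned into a trivial helper (a sketch; the function name is mine):

```python
def required_buffer_cache_gb(warm_gb: float, ratio: int = 8) -> float:
    """Default NSE guidance: 1 GB of buffer cache per 8 GB of warm data."""
    return warm_gb / ratio

# 4 TB of warm data needs roughly a 500 GB buffer cache.
print(required_buffer_cache_gb(4000))   # 500.0
# As a fraction of the warm volume, 1:8 is 12.5% -- within the 10-20%
# range typical of traditional disk-based databases.
print(f"{1 / 8:.1%}")                   # 12.5%
```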

       

       

  • Hi, you mentioned that S/4 today already uses NSE as part of data aging. Can you please clarify this a bit more? I don't remember reading about the need to configure a 'buffer cache' in the data aging setup in S/4. How is it similar to or different from the NSE usage described in this blog?

    • The S/4 data aging framework was built on a HANA feature called "paged attributes". The paged attributes feature was a limited implementation of disk paging that was sufficient for the S/4 requirements but was not a full implementation suitable for general purpose use. NSE is the extension and enhancement of the paged attributes capability into a full disk paging capability suitable for general purpose use.

       

      The S/4 data aging framework continues to use the older paged attributes syntax at the database level, which is now handled by NSE.

       

      NSE is enabled by default and the default buffer cache configuration is 10% of HANA memory capacity.
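As a sketch, that cache limit can be adjusted via an ini parameter (the buffer_cache_cs / max_size names reflect the SPS 04 documentation and the value is in MB; verify both for your release):

```sql
-- Raise the NSE buffer cache limit to ~200 GB (value in MB).
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('buffer_cache_cs', 'max_size') = '204800'
  WITH RECONFIGURE;
```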

       

      Note that with the S/4 HANA 2020 release in October, the ABAP data dictionary (DDIC) is now NSE aware and it is possible to use NSE beyond just the data aging framework.

       

      • Hi, Robert

        The updated SAP Note 2816823 has the following info:

        SAP S/4HANA releases prior to SAP S/4HANA 2020 (that is, SAP S/4HANA 1909 (any SPS) and older), as well as SAP Business Suite powered by SAP HANA

        Use of NSE in an SAP S/4HANA or SAP Business Suite powered by SAP HANA systems outside of the context of data aging is supported under the following conditions:

        • SAP Note 2898319 or a correspondingly higher version of SAP_BASIS is implemented in the system. This ensures protection of the load unit that was set on the database level during table conversion e.g. in the course of upgrade events. This protection is provided for load unit settings on the level of the entire table, individual partitions, and individual columns.
        • You have carefully evaluated the use of NSE for the database object (table, table column, or table partition) in question and verified that performance impact caused by the use of NSE is acceptable.

        It should also be noted that currently the load unit must be defined via database means, e.g. using the SAP HANA cockpit. It cannot be defined in the ABAP Data Dictionary (transaction SE11 -> SE13).

         

        Can we use native NSE configuration (via ALTER TABLE) on S/4HANA 1709? Are all S/4HANA tables supported, or just, for example, technical tables?

         

        Best regards,

        Dmitry

        • Hi Dmitry,

           

          Yes, as per that SAP Note you may manually implement NSE at the database level as long as you have applied the documented SAP Notes.

           

          While you can physically configure any object to be page loadable, remember that there is a performance cost to using page loadable data as compared to pure in-memory column loadable data. Hence the point in the SAP Note stating:

          • You have carefully evaluated the use of NSE for the database object (table, table column, or table partition) in question and verified that performance impact caused by the use of NSE is acceptable.

          We would still recommend that any use of NSE in Suite on HANA systems be implemented first as a POC in a test environment, ideally working directly with your SAP account team for expert guidance.

           

          Thanks

          Rob

  • I heard it will be available for SAP CAR. Does NSE for HANA for SAP CAR require an additional license? In CAR POS DTA we have big POS transaction tables, so it is very important to understand TCO.

     

    • Yes, SAP Customer Activity Repository ("CAR") supports NSE. There is no additional HANA license required for NSE. NSE is a feature of the core HANA server and is available with all HANA versions. I am not aware of any additional license requirements for CAR itself to leverage their data tiering capabilities, but that question should be directed to the CAR team.