Robert Waywell

SAP HANA Native Storage Extension: A Native Warm Data Tiering Solution

Exponential data growth shows no signs of slowing. Customers who want to lower their total cost of ownership (TCO) while still leveraging their vast reservoirs of data to maximum effect must continuously manage data storage costs without forfeiting the high performance they have come to expect from SAP HANA. A few data tiering options with different price/performance characteristics are already available, but soon SAP customers will have another option in their data tiering arsenal: a native solution for warm data tiering, the SAP HANA Native Storage Extension.

A Quick Data Tiering Refresher

Mission-critical hot data is retained in-memory in the SAP HANA database for real-time processing and analysis. Less frequently used warm data is stored in a lower cost tier but is still managed as a unified part of the SAP HANA database. Rarely used, voluminous cold data is located on the lowest cost storage. Regardless of location, all data remains accessible at any time.

The new SAP HANA Native Storage Extension (NSE) adds a native warm data tier to the SAP HANA database. Any customer-built or SAP-built HANA application that is challenged by ballooning data volumes can leverage this deeply integrated warm data tier. NSE increases SAP HANA data capacity at a low TCO through a simple, scalable landscape that offers great performance. Supporting full SAP HANA functionality and all SAP HANA data types and data models, NSE complements—without replacing—the Extension Node and Dynamic Tiering warm data options. NSE is supported for both on-premise SAP HANA systems, and SAP Cloud Platform, SAP HANA Service.

How Does SAP HANA Native Storage Extension Work?

Hot data is ‘column loadable’: it resides completely in-memory for fast processing and is loaded from disk into SAP HANA memory by column. The SAP HANA Native Storage Extension allows a user to designate certain warm data as ‘page loadable’ instead, which is then loaded into memory page by page as required for query processing. Unlike column loadable data, page loadable data does not need to reside completely in-memory.

NSE reduces the memory footprint of the SAP HANA database with expanded disk capacity and an intelligent buffer cache that transfers pages of data between memory and disk. Query performance differences may be noticeable between warm data and hot data.
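To make the load unit concept concrete, here is a minimal sketch in SAP HANA SQL. The table and columns are hypothetical examples; the load unit clauses follow the SAP HANA 2.0 SPS 04 SQL reference, so verify the exact syntax against the documentation for your release.

```sql
-- Create a table whose data is paged from disk on demand (warm, managed by NSE):
CREATE COLUMN TABLE sales_history (
    order_id   INT,
    order_date DATE,
    amount     DECIMAL(15,2)
) PAGE LOADABLE;

-- Convert an existing hot table (and all of its columns) to warm storage:
ALTER TABLE sales_history PAGE LOADABLE CASCADE;

-- Bring it back fully in-memory:
ALTER TABLE sales_history COLUMN LOADABLE CASCADE;
```

The load unit is transparent to queries: applications read and write a page loadable table with the same SQL they would use against a column loadable one.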

NSE Warm Data Tiering

Additional Database Capacity with SAP HANA Native Storage Extension

SAP HANA scale-up systems are supported with this upcoming initial NSE release; scale-out support will follow in a later release. With NSE, you will be able to expand SAP HANA database capacity with warm data on disk up to about 4x the size of hot data in memory. A relatively small amount of SAP HANA memory is needed for the NSE buffer cache that drives paging operations, since the buffer cache can handle 8x its size in warm data on disk. As an example, a 2TB SAP HANA system without NSE equates to a 1TB database in memory. With NSE and the addition of a 500GB buffer cache, you can expand that 1TB database to a 5TB database: 1TB of hot data, 4TB of warm data, and a 500GB buffer cache to page data between memory and disk.
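The 500GB buffer cache in the example can be configured with a system-level parameter. This is a hedged sketch: the `buffer_cache_cs` / `max_size` parameter names follow the SPS 04 administration documentation, and the value shown simply mirrors the example above.

```sql
-- Set the NSE buffer cache limit (value in MB; ~500 GB here):
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
    SET ('buffer_cache_cs', 'max_size') = '512000'
    WITH RECONFIGURE;

-- With the default 1:8 cache-to-warm-data guidance, ~500 GB of buffer cache
-- supports roughly 4 TB of page loadable warm data on disk.
```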

Next Steps

We are excited to roll out this native SAP HANA warm data tiering solution that boasts full SAP HANA functionality at a lower TCO for customers. Want to learn more about SAP HANA Native Storage Extension? Find out more here or drop me a question in the comments section below.

      35 Comments
      Kumar Avi

      Hi Robert,
      Are there any SAP products e.g. S/4, BW which are already leveraging NSE?

      Regards, Avi

      Robert Waywell
      Blog Post Author

      Hi Kumar,

      NSE is an extension and enhancement of the Paged Attributes capability which S/4 already uses for data aging. In that context S/4 is using a version of NSE today, with room to take more advantage of the enhanced capabilities in the future.

      BW is currently focused on the use of Extension Nodes for warm data tiering. BW may evaluate NSE for future use.

      Thanks
      Rob

      Arash Shojaei

      Hi Robert,
      In the NSE solution, should we consider the persistency layer (disk, data volume) of HANA as the warm tier, or do we need to deploy extended storage (e.g. SAP IQ, which is tightly integrated with the HANA DB) as the warm tier?
      Also, is there a possibility of adopting persistent memory (PMEM) for the hot tier and NSE for the warm tier at the same time?
      Thanks,
      Arash

      Robert Waywell
      Blog Post Author

      Hi Arash,

      NSE is a warm data tiering option within the HANA system. Like DT, NSE pages data back and forth between disk and a cache as needed to process queries. A big difference between NSE and DT is that NSE is a feature of the HANA index server and runs as part of that process. Data that is assigned to be "page loadable", meaning that it is managed by NSE, is written to the same files on disk as in-memory "column loadable" HANA data. For in-memory column loadable data, HANA only needs to read the data from disk immediately after start up. In contrast, NSE page loadable data is read into the buffer cache as needed, then flushed or "ejected" from the cache either when no longer needed or when HANA needs to free up space in the buffer cache for other pages. If the same page is required again, it will be read from disk again.

      Since NSE shares the same disk storage files or "disk persistency" as in-memory HANA data, there is no need to also configure DT. Our current recommendation is that new data tiering projects should evaluate NSE as the first warm data tiering option. Key limitations for NSE in the HANA 2.0 SPS 04 release are that it supports scale-up (single node) systems only and supports up to 10TB of warm data. If you require warm data tiering for scale-out (2 or more node) systems and/or data volumes in the range of 11TB to 100TB, then DT should be considered.

      Yes you can use PMEM storage for in-memory data and still have other tables, partitions, or columns assigned to page loadable storage using NSE.

      Wolfgang Albert Epting

      I have a customer question if there are plans to retrofit Suite on HANA to be able to leverage NSE?

      PRJSYN BasisAdmin

      Hi Robert,
      Thanks for the detailed information.
      I have a few more questions: Is NSE compatible with BW on HANA environments? Also, could you point me to NSE configuration and administration guides?

      Thanks and Regards,
      Kiran

      Markus Theilen

      Hi, Robert!
      We would like to actively use NSE after our planned upgrade project to SPS 04. In the documentation I cannot find any specific steps that need to be executed before using NSE. Is this true? Isn't there some kind of configuration necessary before using it?

      Thanks in advance,
      Markus

      Robert Waywell
      Blog Post Author

      Hi Wolfgang,

      S/4 HANA uses NSE for data aging but there are no plans for Suite on HANA to leverage NSE.

      Thanks
      Rob

      Robert Waywell
      Blog Post Author

      Hi PRJSYN,

      It is up to individual SAP application teams to decide which HANA features to use. In the case of BW, the BW team is considering adding support for NSE in 2020.

      For NSE documentation I would recommend starting from the SPS 04 New Features section of the documentation which then links to more detailed NSE documentation: https://help.sap.com/viewer/42668af650f84f9384a3337bcd373692/2.0.04/en-US/c71469e026c94cb59003b20ef3e93f03.html

      We will also be delivering a hands-on session for NSE at TechEd this year. The session title is DAT370 - Operating an SAP HANA System with SAP Native Storage Extension

      Thanks
      Rob

      Robert Waywell
      Blog Post Author

      Hi Markus,

      Data can be allocated to NSE storage at a table, column, or partition level. For existing objects this is done using an ALTER TABLE command.
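As a hedged illustration of the three granularities mentioned above, the ALTER TABLE variants look roughly like this. The table, column, and partition below are hypothetical, and the exact syntax should be verified against the SAP HANA SQL reference for SPS 04.

```sql
-- Whole table (CASCADE applies the load unit to all columns and partitions):
ALTER TABLE sales PAGE LOADABLE CASCADE;

-- A single column (the column definition is restated with the new load unit):
ALTER TABLE sales ALTER (comments NVARCHAR(2000) PAGE LOADABLE);

-- A single partition of a partitioned table, identified by partition id:
ALTER TABLE sales ALTER PARTITION 2 PAGE LOADABLE;
```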

      There is a recording available for the "What's New in HANA 2.0 SPS04: HANA Data Tiering Options" presentation here: https://event.on24.com/eventRegistration/EventLobbyServlet?target=reg20.jsp&partnerref=hanablog&eventid=1945137&sessionid=1&key=AE112CD83A14398AFBBAE01766DD6EF8&regTag=465648&sourcepage=register

      You can also work through the NSE documentation section in the SAP HANA Platform documentation: https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.04/en-US/4efaa94f8057425c8c7021da6fc2ddf5.html

      This includes a section specifically covering "Getting Started With the Native Storage Extension": https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.04/en-US/7ab2658cce56438c92f1cc7b13c50597.html

      Thanks
      Rob

      Adinarayanan MN

      Hi Robert,

      I have read through your blog and also a few more blogs on NSE. I feel NSE is a good option among the various data aging solutions, as it is easier to manage without any additional nodes. I have some specific queries; it would be great if you could help with them:
      1. In my workplace we have SAP BW on HANA 7.5 SP5 with HANA 1.0 SP12. I am looking at NSE to unload some data to warm storage instead of keeping all the data in memory. In this regard, when will NSE likely be made available for SAP BW on HANA? What will be the supported versions? What is SAP's plan for NSE for SAP BW on HANA?
      2. If NSE is made available for SAP BW on HANA, will it be possible to enable NSE only for new providers like ADSOs, or will there also be an option to move some data from older providers like HANA-optimized DSOs (Standard, Write-optimized, Direct update), Cubes, etc. to the NSE layer?
      Looking forward to hearing from you.

      Thanks and Regards,
      Adi

      Robert Waywell
      Blog Post Author

      Hi Adi,

      The BW team is targeting support for NSE in Q1/2020. I'm not sure what BW version that will be. You may want to submit a question through the SAP Community Network (https://answers.sap.com/questions/ask.html) and tag it "BW SAP HANA Data Warehousing". That would also be a good forum to ask about what BW objects will be supported for data tiering.

      Thanks
      Rob

      Udo Neumann

      Hi Rob,

      Is it possible to use NSE with XSA? After checking the XSA documentation, it seems that NSE is not supported.

      Regards,

      Udo

      Robert Waywell
      Blog Post Author

      Hi Udo,

       

      XSA is an application server layer that can run as part of a HANA system. Applications built to run on XSA still connect to a HANA database and access HANA tables. The use of NSE is configured at the table level as part of the CREATE TABLE or ALTER TABLE statements and is transparent to an application that is using the table. Note that the use of NSE, or assignment of a PAGE LOADABLE load unit, can also be controlled when using HDI for the physical data modelling. With HDI, the PAGE LOADABLE load unit assignment is included in the hdbtable table definition.
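For HDI-based projects, a design-time .hdbtable artifact can carry the load unit directly. This is a hedged sketch with a hypothetical table; check the HDI table artifact documentation for the precise grammar supported by your HDI plugin version.

```sql
-- my_table.hdbtable (design-time HDI artifact, deployed by the hdbtable plugin)
COLUMN TABLE audit_log (
    id        BIGINT,
    logged_at TIMESTAMP,
    payload   NCLOB
) PAGE LOADABLE
```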

       

      The end result is that there is no XSA specific syntax required to use a table that is either fully or partially assigned to be PAGE LOADABLE (using NSE) whether that table is being accessed through SQL statements or through calculation views.

       

      Thanks

      Rob

      Sandeep Hanumaihgari

      Hi Robert,

      For a customer holding an enterprise edition license, do we need to purchase any extra license to enable NSE?

      Thanks,

      Sandeep

      Robert Waywell
      Blog Post Author

      Hi Sandeep,

       

      NSE is a feature of the core HANA indexserver process and is included and enabled in all HANA 2.0 SPS 04 installations. No additional license is required to use NSE.

       

      Thanks

      Rob

      Lukasz Kurlit

      Hello Robert,

       

      Does this mean that NSE is also available to customers running HANA with a runtime license for BW?

      Thank you in advance for the information; I cannot find it anywhere.

       

      Best regards:

      Lukasz

      Robert Waywell
      Blog Post Author

      Hi Lukasz,

       

      Yes, as I said NSE is a feature of the core HANA indexserver and is part of all HANA 2.0 SPS 04 systems. That includes runtime systems. While there is no separate license required for NSE, the buffer cache used to manage page loadable data with NSE is part of licensed HANA memory capacity.

       

      BW/4HANA added support for NSE with its SP 04 release in March.

      Thanks

      Rob

      Lukasz Kurlit

      Hi Robert,

       

      thank you so much for fast and clear answer.

       

      Best regards:

      Lukasz

      Sivaramakrishnan R

      Hi Rob,

      In your example, enabling NSE means we need a 500GB buffer cache. I assume this will be carved out of the 2TB and hence usable capacity will be 1.5TB which means database size (hot) will be 750GB? Is this a correct understanding while we look at sizing?

      And, even though I get the idea of data temperatures, it looks like we are going back to the old school of buffering data from disk and hence HANA becomes like a hybrid database with in-memory and traditional database styles. I assume the buffer cache will use LRU mechanisms to page out and there will be an impact on performance of queries accessing data stored in NSE. Is it just a price to performance call?

      Robert Waywell
      Blog Post Author

      Hi Sivaramakrishnan,

       

      In the example given the total HANA memory capacity is being increased from 2 TB to 2.5 TB with the addition of 500 GB of HANA memory capacity for use by the buffer cache. If you were to carve the buffer cache out of the existing 2 TB of HANA memory capacity then following the default sizing guidance you would allocate a maximum of 400 GB of memory to the buffer cache and your maximum data volume would be 800 GB of hot in-memory data + 3.2 TB of warm page loadable data for a total database capacity of 4 TB.

       

      Yes NSE implements database paging for HANA and yes the basic caching algorithm is an LRU algorithm. Since page loadable data using NSE is not always in memory, then operations on that warm data are expected to take longer than operations on pure in-memory data.

       

      Generally speaking, customers who are implementing NSE are looking to manage TCO by reducing their HANA memory footprint while scaling overall HANA data capacity and maintaining reasonable performance for older less frequently accessed data.

      Praveen Javehrani

      Hi Robert,

      The above scenario makes sense to me. If I were to carve the buffer cache from the existing memory:

      For example, in a 2TB system, working memory is 50% and hot data is 50%.

      With NSE, let's say the buffer cache is 10%, i.e. 200GB. Will the working memory still be at 50% and hot data at 40%?

      What is the maximum buffer cache one can reserve or carve out of memory?

      Does that reduce the hot data in memory?

      What should the ratio of working memory to hot data in memory be?


      Robert Waywell
      Blog Post Author

      Hi Praveen,

       

      My apologies for the slow response to your questions.

       

      HANA does not enforce a hard split between memory used for in-memory column loadable data and memory used for working memory. When memory is allocated to the buffer cache it is exclusively available to the buffer cache so it takes away from the pool of memory available to be used for in-memory column loadable data and working memory. Note that the buffer cache is only allocated as required, up to the configured limit, but once allocated it remains part of the buffer cache until the system is restarted.

       

      The traditional sizing ratio was to plan on using up to 50% of your physical memory for data and leave 50% of the physical memory for working memory. Current TDI specs recognize that different workloads can vary in their requirements for working memory relative to data volume, so that 50:50 ratio can vary to fit your use case.

       

      If your workload requires you to maintain the same amount of working memory, and if you don't have the option of adding memory to the system (which would be typical for an appliance system), then yes you would want to limit your in-memory data volume to ensure the necessary amount of working memory is available.

       

      Based on the default sizing ratios for NSE, you would allocate up to 20% of your HANA memory as buffer cache. Note that the allocation of memory to the buffer cache is relative to the amount of HANA memory, but the recommended or required amount of buffer cache is relative to the warm data volume. The default guidance is a 1:8 ratio of buffer cache to warm data volume. For a traditional disk-based database, a typical cache allocation would be in the range of 10-20% of the data volume, so the 1:8 starting ratio would put you at 12.5%.

      Those aren't physical limits on how much buffer cache can be allocated and there may be use cases where it makes sense to go higher. However, if you are finding that you need to continue increasing the buffer cache size, then you should re-evaluate whether you should be bringing specific pieces of data back into in-memory column loadable storage. The NSE Advisor can help you with that evaluation.
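To support that evaluation, a few monitoring queries can show how the buffer cache and load units are behaving. This is a hedged sketch: the view and column names below are drawn from the SPS 04 monitoring view documentation, and the table name is a hypothetical example.

```sql
-- Buffer cache size, usage, and hit ratio:
SELECT * FROM M_BUFFER_CACHE_STATISTICS;

-- Current load unit (PAGE vs COLUMN loadable) of a given table:
SELECT table_name, load_unit
FROM   TABLES
WHERE  table_name = 'SALES';

-- NSE Advisor recommendations (requires the advisor to be enabled):
SELECT * FROM M_CS_NSE_ADVISOR;
```

If the hit ratio stays low even after growing the cache, that is usually the signal to move the affected objects back to column loadable storage.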

       

       

      Chandrakanth Angannagari

      Hi, you mentioned that S/4, as part of data aging, already uses NSE today. Can you please clarify this a bit more? I don't remember reading about the need to configure a 'buffer cache' for the data aging setup in S/4. How is it similar to or different from the NSE usage described in this blog?

      Robert Waywell
      Blog Post Author

      The S/4 data aging framework was built on a HANA feature called "paged attributes". The paged attributes feature was a limited implementation of disk paging that was sufficient for the S/4 requirements but was not a full implementation suitable for general purpose use. NSE is the extension and enhancement of the paged attributes capability into a full disk paging capability suitable for general purpose use.

       

      The S/4 data aging framework continues to use the older paged attributes syntax at the database level, which is now handled by NSE.

       

      NSE is enabled by default and the default buffer cache configuration is 10% of HANA memory capacity.

       

      Note that with the S/4 HANA 2020 release in October, the ABAP data dictionary (DDIC) is now NSE aware and it is possible to use NSE beyond just the data aging framework.

       

      Dmitriy Krivov

      Hi, Robert

      The updated SAP Note 2816823 has the following info:

      SAP S/4HANA releases prior to SAP S/4HANA 2020 (that is, SAP S/4HANA 1909 (any SPS) and older), as well as SAP Business Suite powered by SAP HANA

      Use of NSE in an SAP S/4HANA or SAP Business Suite powered by SAP HANA systems outside of the context of data aging is supported under the following conditions:

      • SAP Note 2898319 or a correspondingly higher version of SAP_BASIS is implemented in the system. This ensures protection of the load unit that was set on the database level during table conversion e.g. in the course of upgrade events. This protection is provided for load unit settings on the level of the entire table, individual partitions, and individual columns.
      • You have carefully evaluated the use of NSE for the database object (table, table column, or table partition) in question and verified that performance impact caused by the use of NSE is acceptable.

      It should also be noted that currently the load unit must be defined via database means, e.g. using the SAP HANA cockpit. It cannot be defined in the ABAP Data Dictionary (transaction SE11 -> SE13).

       

      Can we use native NSE configuration (via ALTER TABLE) on S/4HANA 1709? Are all S/4HANA tables supported, or just, for example, technical tables?

       

      Best regards,

      Dmitry

      Robert Waywell
      Blog Post Author

      Hi Dmitry,

       

      Yes, as per that SAP Note you may manually implement NSE at the database level as long as you have applied the documented SAP Notes.

       

      While you can physically configure any object to be page loadable, remember that there is a performance cost to using page loadable data as compared to pure in-memory column loadable data. Hence the point in the SAP Note stating:

      • You have carefully evaluated the use of NSE for the database object (table, table column, or table partition) in question and verified that performance impact caused by the use of NSE is acceptable.

      We would still recommend that any use of NSE in Suite on HANA systems be implemented first as a POC in a test environment, ideally working directly with your SAP account team for expert guidance.

       

      Thanks

      Rob

      Grigory Pogrebissky

      I heard it will be available for SAP CAR. Does NSE for HANA for SAP CAR require an additional license? In CAR POS DTA we have big POS transaction tables, so it is very important to understand TCO.

       

      Robert Waywell
      Blog Post Author

      Yes, SAP Customer Activity Repository ("CAR") supports NSE. There is no additional HANA license required for NSE. NSE is a feature of the core HANA server and is available with all HANA versions. I am not aware of any additional license requirements for CAR itself to leverage their data tiering capabilities, but that question should be directed to the CAR team.

       

       

      Martin Wolf

      Hi, I would like clarification on the conditions for S/4HANA standard and custom reports when using NSE or the Data Aging Framework.

      As I understand it, using NSE on HANA 2 SPS 04 onwards does not have any impact on existing reports in S/4HANA. That means I do not have to make any changes or modifications when starting NSE in a brownfield environment (see https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.05/en-US/4efaa94f8057425c8c7021da6fc2ddf5.html).

      Instead, using the Data Aging Framework for the "warm" tier functionality means I have to adjust existing reports if I wish to keep including the aged data:

      https://help.sap.com/viewer/669e1da71e744a34af9b86deec50a57c/7.5.17/en-US/fc32744dddb447b990f3184fdac1cf88.html

      (No. 22 + Nr. 5)

      Can you confirm that, or did I get a few things wrong?

      Thanks a lot!

      Martin

      Eugen Pritzkau

      Hi Robert,

      thank you for the blog!

      You mentioned that NSE can additionally hold up to 4x the amount of hot in-memory data. Does this restriction still exist? I could not find this limitation in the official NSE documentation.

      regards,

      Eugen

      Bert Braasch

      Dear Mr. Pritzkau,

       

      I have to agree with you on this issue. Unfortunately, the information available on the subject is not always entirely clear. To be honest, I have not found this information for SPS04 either. Maybe it is in the functional restrictions for SPS04: https://launchpad.support.sap.com/#/notes/2771956

       

      Unfortunately, this note has been in update status since last week; it is unclear why this is still being revised for SPS04 now.

       

      In the NSE guide, only the buffer cache is discussed: https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.05/en-US/2d772d95d7b6477cbed080f82e38a871.html

       

      How the size of the warm store is calculated is unfortunately not to be found there.

      I would read these recommendations as general hints for getting started.

       

      Depending on the use case, more or less buffer cache may also be required, or more or less warm store may be added; the ratios are not technically determined. According to SAP Note 2927591: SAP HANA NSE data size per SAP HANA system and tenant database is not limited by technical enforcement.

       

      The following statement is also interesting on the subject: https://blogs.sap.com/2019/04/16/sap-hana-native-storage-extension-a-native-warm-data-tiering-solution/#comment-537596

       

      Have you found any information about the maximum size of the NSE Disk Store?

      In the following guide: https://blogs.sap.com/2020/02/20/nsenative-storage-extension-data-tiering-options/

      in the section, On-Premise Sizing the following statement can be found: NSE disk store should be no larger than 10TB for the first release of NSE.

       

      I can't find this restriction either; here I am also hoping for SAP Note 2771956 once it is no longer stuck in update status. Otherwise, I currently assume that any size of HANA system is supported, since scale-out is also supported in the meantime:

      https://blogs.sap.com/2020/06/25/whats-new-in-sap-hana-2.0-sps-05/

       

      However, this does not apply to BW/4HANA systems:

      Whether or not SAP HANA NSE is used in SAP BW∕4HANA for Data Tiering is decided at the system level: SAP HANA NSE is only used as the data tier for warm data if there are no SAP HANA extension nodes configured for the system, and the landscape is a single-node landscape rather than a scale-out landscape.

      Source: https://help.sap.com/viewer/107a6e8a38b74ede94c833ca3b7b6f51/2.0.9/en-US/a9493e2172294f72847e2293aeeb14cd.html

       

      I am also in the process of gathering information about NSE and data tiering solutions in general. We can also exchange information on this topic directly.

       

      Kind Regards

      Bert Braasch

      Eugen Pritzkau

      Hi Bert,

      thank you for explanation!

      If I understood you and the other commenters correctly, the following memory example would be valid:

      For a HANA system with a total size of 8 TB RAM, we should first reserve half as working memory, so that analyses and queries on in-memory data can be processed.

      Out of the remaining 4TB we can decide how much data should be in-memory for real-time analysis. Let us take 2 TB of that for such real-time analysis.

      Now, if we want to take advantage of NSE to the fullest extent, we devote the remaining 2TB to the NSE buffer cache. According to the HANA guide, the buffer cache should not be smaller than a 1:8 ratio, so for a 2TB buffer cache we could put up to 16 TB of less-frequently accessed data into NSE.

      Do you agree with my example calculation, or am I missing something?

      Thank you in advance,

      Eugen

      Robert Waywell
      Blog Post Author

      The ratio of hot in-memory data volume to warm page loadable data volume really depends on the use case. The 1:4 hot:warm default ratio was intended as very basic guidance, a starting point.

      To give a couple of quick example use cases:

       

      A typical sales system would keep all data in memory for the first 2 years since current year data is active and comparisons of current and prior year data tend to be done fairly frequently. In that scenario it would only be in year 3 that you would start tiering older data to NSE and it would only be after 10 years that you would reach a 1:4 hot:warm ratio.

       

      On the other hand, if you are collecting audit data of some sort where even the most recent data isn't usually accessed, then you may want all of that data to go directly to a page loadable table and not keep any of it in column loadable in-memory storage.