Business Trends
SAP HANA Native Storage Extension: A Cost-Effective & Simplified Architecture for Enhanced Scalability
The SAP HANA business data platform, designed from the ground up for columnar in-memory data processing, continues to evolve with new features that lower TCO and support larger data capacities for combined transactional and analytical workloads. Meanwhile, legacy RDBMS technologies remain limited to optimizing the performance of their disk-based architectures and, at best, adding in-memory accelerators to support read-only OLAP operations.
SAP HANA Native Storage Extension for Warm Data
One of the key features of the spring SAP HANA release is Native Storage Extension. As a completely built-in WARM data management solution and an alternative to Dynamic Tiering and Extension Nodes, it is designed to operate larger SAP HANA installations at reduced TCO. As application data sizes grow, it is not always necessary to keep all data in memory. Depending on its relevance to day-to-day business operations, data that is less frequently accessed can be marked as WARM so that it is loaded into memory only as needed. With a simple architecture and complete SAP HANA functionality support, Native Storage Extension requires minimal changes to table and column definitions and has no impact on the day-to-day management of an SAP HANA installation.
Fig 1: SAP HANA with Native Storage Extension
Load Warm Data Into Memory as Required
SAP HANA tables, columns, and partitions are by default 'column loadable', meaning the entire object is loaded into HOT memory. To take advantage of the Native Storage Extension feature of SAP HANA, these objects can be specified as 'page loadable' in the DDL, as in the examples below:
Table

CREATE COLUMN TABLE T (C1 INT, C2 VARCHAR(10)) PAGE LOADABLE;

Partition

CREATE COLUMN TABLE T (C1 INT) PARTITION BY RANGE (C1) ((PARTITION 0 <= VALUES < 10 PAGE LOADABLE, PARTITION OTHERS COLUMN LOADABLE));

Column

CREATE COLUMN TABLE T (C1 INT, C2 VARCHAR(10) PAGE LOADABLE);

Convert an existing table to page loadable

ALTER TABLE T PAGE LOADABLE CASCADE;
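After running DDL like the above, the resulting load unit can be checked in the system catalog. A minimal sketch, assuming the standard SYS catalog views, where LOAD_UNIT is reported as COLUMN, PAGE, or DEFAULT:

```sql
-- Load unit at table granularity
SELECT TABLE_NAME, LOAD_UNIT
  FROM SYS.TABLES
 WHERE TABLE_NAME = 'T';

-- Load unit at column granularity
SELECT TABLE_NAME, COLUMN_NAME, LOAD_UNIT
  FROM SYS.TABLE_COLUMNS
 WHERE TABLE_NAME = 'T';

-- Load unit at partition granularity
SELECT TABLE_NAME, PART_ID, LOAD_UNIT
  FROM SYS.TABLE_PARTITIONS
 WHERE TABLE_NAME = 'T';
```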
Fig 2: Native Storage Extension – Technical Architecture
When data stored in the Native Storage Extension is required by applications, the data gets loaded into the Buffer Cache page by page as needed. Once a given page is in the buffer cache, it can be used by multiple queries, without being reloaded. The buffer cache intelligently manages the loading and unloading of pages to minimize disk I/O.
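The buffer cache can be capped and observed explicitly. A hedged sketch, assuming the SAP HANA 2.0 SPS04 parameter names for the column-store buffer cache (max_size is specified in MB):

```sql
-- Cap the NSE buffer cache at 100 GB (value in MB); assumes the
-- buffer_cache_cs section of indexserver.ini.
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('buffer_cache_cs', 'max_size') = '102400' WITH RECONFIGURE;

-- Observe current size and usage of the buffer cache.
SELECT CACHE_NAME, MAX_SIZE, USED_SIZE
  FROM M_BUFFER_CACHE_STATISTICS;
```

A well-sized buffer cache keeps frequently touched WARM pages resident, so repeated queries avoid the disk I/O the paragraph above describes.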
Applications requiring unlimited data capacities can also take advantage of attaching a COLD data tier leveraging a choice of technologies.
Fig 3: SAP HANA with NSE WARM data + COLD data tier for unlimited scale
The Perfect Architecture to Replace Legacy DBMS Technologies
SAP has general sizing guidance for the HOT vs. WARM data ratio and Buffer Cache size. However, it is ultimately the application performance SLA that drives the decisions on the HOT vs. WARM data ratio and the Native Storage Extension Buffer Cache size.
To size hardware for Native Storage Extension, simply add the disk capacity needed to accommodate the WARM data and the memory needed to accommodate the Buffer Cache requirements, per the SAP sizing guidance.
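One way to ground those sizing decisions is to measure the current in-memory footprint of the largest tables before deciding what to mark as page loadable. A rough sketch, assuming the M_CS_TABLES monitoring view:

```sql
-- Largest column-store tables by in-memory size (GB): candidates
-- whose less frequently accessed partitions might become WARM data.
SELECT SCHEMA_NAME, TABLE_NAME,
       ROUND(MEMORY_SIZE_IN_TOTAL / 1024 / 1024 / 1024, 2) AS SIZE_GB
  FROM M_CS_TABLES
 ORDER BY MEMORY_SIZE_IN_TOTAL DESC
 LIMIT 10;
```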
The combination of full in-memory HOT data for mission-critical operations, complemented by less frequently accessed WARM data, is the perfect and simple architecture to replace legacy DBMS technologies. It also eliminates the need for legacy DBMS add-on in-memory buffer accelerators for read-only OLAP operations, which require painful configuration and management.
SAP HANA Native Storage Extension currently supports native SQL applications with support for SAP S/4HANA and SAP BW/4HANA expected in upcoming releases. For further information, refer to the SAP HANA Native Storage Extension documentation on SAP Help Portal and SAP Notes 2775588 & 2771956. I would also recommend my colleague Robert Waywell’s recent blog post: SAP HANA Native Storage Extension: A Native Warm Data Tiering Solution.
How does your organization currently manage warm data? Look forward to reading your comments.
Thank you very much for the information and blog!
May I know if the above ALTER statement for converting a table to page loadable is an online operation, or do we need downtime?
No, you do not require downtime to convert the table to page loadable.
OK, thanks! Although there is no downtime, will there be blocking or locking of objects if transactions are using them while we do this? How do we tackle those scenarios, and what are the recommendations and best practices for implementing it?
The overall operation is non-blocking.
Hello Venkata,
First of all thanks for the informative Blog.
I have a couple of questions:
1) Does the Data Aging Framework make use of NSE? If yes to what extent?
2) What are the limitations (if any) with HANA 2.0 SP4?
Hi Venkata,
Regarding storage for NSE warm data on disk, is there an extra license cost? Is it priced separately based on data volume?
Thanks,
Arash
The statements about so-called "legacy RDBMS" are simply wrong. I wonder why this is necessary in such a blog.
Hi Sadanand,
NSE is currently supported for native HANA SQL use cases only. We expect SAP applications to certify NSE for the data aging framework in a future release.
With SAP HANA 2.0 SPS04, NSE can store up to 10 TB of WARM data and is supported in scale-up configurations only. The next release, SAP HANA 2.0 SPS05, is expected to support much higher capacities and scale-out configurations.
Hi Arash,
NSE usage rights are included in the SAP HANA license. However, the NSE Buffer Cache size is treated as part of the in-memory capacity of SAP HANA. There is no additional cost for data stored on disk.