Former Member

How New Non-Volatile Hardware Technology Revolutionizes In-Memory Computing

Digital transformation is occurring whether organizations are ready for it or not. According to Forrester’s Unleash Your Digital Predator, 89% of executives believe digital will disrupt their business in the next 12 months.

Not surprisingly, Gartner also found that 70% of executives believe that IT investment can impact their company’s ability to embrace digital transformation and spur innovation, both on the software and the hardware end.

On the software side of things, SAP HANA 2 allows powerful analytical processing so organizations can build insight-driven applications to stay ahead of the competition.

But how does SAP HANA 2 help them embrace new hardware technology as well?

Handling More Data in Memory for Even Greater Insight
Acting in real-time requires a modern data platform that can process increasingly large, complex volumes of data, both transactional and analytical, in memory to deliver insights and results the moment you need them.

Customers who are innovating with SAP HANA already process large amounts of data in memory – up to 50 TB, which corresponds to approximately 500 TB in a traditional database. This is only possible due to SAP HANA's superior compression. And data growth is not slowing down – as data grows, so does the need for our customers to create business value from even larger amounts of data in memory.

Once again, SAP HANA is at the forefront, pioneering the adoption of the latest Non-Volatile RAM (NVRAM) hardware technology to evolve in-memory computing even further.

SAP HANA will use NVRAM as an extension of classical dynamic random access memory (DRAM) by selectively shifting data structures from DRAM to NVRAM.

This enables SAP HANA to exploit the unique characteristics of both technologies. The best part is that this does not require “heart surgery” on the SAP HANA database, because the design of SAP HANA already perfectly accounts for different memory hierarchies, each of them optimized for a specific purpose with specific characteristics.

As a first major step, the highly compressed, read-optimized part of the column store – which accounts for over 90% of all data in most SAP HANA systems – is enabled for placement in NVRAM. Having 90% of your data still in memory, even after a database or server shutdown, means no reload from the persistence layer and maximum performance right from the start.
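To make the mechanism concrete, here is a minimal sketch (not SAP HANA code) of the underlying technique. On real hardware, persistent memory is typically exposed through a DAX-enabled file system and mapped directly into the address space with mmap; in this illustration an ordinary temporary file stands in for the NVRAM device, and a read-optimized integer column stands in for the column-store main fragment.

```python
import mmap
import os
import struct
import tempfile

# Stand-in for a file on a DAX-enabled persistent-memory file system
# (e.g. a path like /mnt/pmem on real hardware); here a regular temp file.
PMEM_PATH = os.path.join(tempfile.gettempdir(), "column_main.pmem")

def write_column(values):
    """Persist a read-optimized integer column into the 'NVRAM' file."""
    with open(PMEM_PATH, "wb") as f:
        f.write(struct.pack(f"<{len(values)}i", *values))

def map_column(count):
    """Re-attach to the column by mapping the bytes, not reloading them."""
    with open(PMEM_PATH, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        return struct.unpack_from(f"<{count}i", mm)

write_column([10, 20, 30])
# Simulated restart: instead of rebuilding the column from a separate
# persistence layer, the process simply maps the existing bytes again.
print(map_column(3))  # -> (10, 20, 30)
```

The key point the sketch illustrates is that after a restart nothing is copied back from disk-based persistence; the data structure is reached by mapping it, which is why the 90% of data placed in NVRAM is available at full capacity immediately.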

As NVRAM DIMMs are expected to be both larger and cheaper than DRAM DIMMs, customers will gain a higher total memory capacity, a higher memory capacity for the same price, or the same memory capacity for a lower price – which directly lowers their TCO.

SAP HANA First to Adopt Intel’s New Persistent Memory
At SAPPHIRE this year, Intel showcased a pre-release version of the Intel Xeon Scalable Family platform with 192 gigabytes of DRAM and 1.5 terabytes of Intel's persistent memory running on a development version of SAP HANA. The demo showcased several thousand SAP HANA user sessions performing read and write operations using both types of memory. Intel persistent memory handles large volumes of real-time operations such as inserts and updates, while DRAM is used for low-latency read operations. By targeting the right operations to each memory type, the system handles larger capacity while maintaining overall performance.
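The routing idea behind the demo can be sketched in a few lines. This is an illustrative toy (the class name, capacities, and eviction policy are invented for the example, not taken from SAP HANA): all data lives in a large persistent tier, inserts and updates go there directly, and a small DRAM-sized cache serves repeated reads at low latency.

```python
# Illustrative sketch of operation tiering: writes land in the large
# persistent tier, hot reads are served from a small DRAM read cache.
class TieredStore:
    def __init__(self, dram_capacity):
        self.pmem = {}            # large, persistent tier (holds all data)
        self.dram = {}            # small, low-latency read cache
        self.dram_capacity = dram_capacity

    def insert(self, key, value):
        self.pmem[key] = value    # real-time inserts/updates go to pmem
        self.dram.pop(key, None)  # drop any stale cached copy

    def read(self, key):
        if key not in self.dram:  # cache miss: promote into DRAM
            if len(self.dram) >= self.dram_capacity:
                self.dram.pop(next(iter(self.dram)))  # evict oldest entry
            self.dram[key] = self.pmem[key]
        return self.dram[key]

store = TieredStore(dram_capacity=2)
store.insert("a", 1)
store.insert("b", 2)
store.insert("c", 3)
print(store.read("a"))  # -> 1, promoted into the DRAM cache on first read
```

The design point is the one the demo makes: capacity scales with the cheaper persistent tier, while the DRAM tier only needs to be large enough for the latency-critical read working set.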

Intel's persistent memory is a perfect fit for SAP HANA's architecture, as it allows integrating what were previously multiple layers of the memory hierarchy into a single layer, combining memory and storage in one memory device. Thus, whole data sets can reside permanently in memory, enabling innovative analytic scenarios that were not possible before.

In addition, because data remains in memory through power cycles, restart times will be a fraction of the time it takes to load all data from disk. And since Intel persistent memory is not burdened with the same cost structure as DRAM, IT organizations can achieve massive in-memory capacity at a lower TCO.

A Whole New Era of In-Memory Computing…
With the new Xeon Scalable processor platforms that will be launched this summer, SAP customers can benefit from a performance increase in SAP HANA of up to 59%, using standard DRAM.

Intel persistent memory will be available with a processor refresh of the Xeon Scalable platform in 2018, code-named Cascade Lake. And while some of SAP’s competitors are still playing with Optane simulators in their labs, SAP HANA is working with the leading platform providers to adopt this new technology as one of the first in the market.

The early adoption of Non-Volatile Memory within the SAP HANA database will enable customers to increase SAP HANA performance by a factor of 1.59, increase the memory capacities of their servers, and dramatically lower their TCO.

…Starts with SAP HANA
Truly digital innovation needs bold ideas – and the courage and drive to turn them into reality. You won’t win in today’s digital economy if you fail to execute on your ideas by just following the competition – you need to lead the pack. Once again, we are not waiting for others to guide us, but continue to stay ahead, making clear to the market that we are not only at the forefront of innovation, but also delivering and executing on our mission: offer our customers what they deserve – the very best technology and solutions available on the market.

Former Member

      Is the use of persistent ram only available on systems running HANA 2?

Martina Bahrke

      Hi Andre, the use of persistent RAM is only available on systems running SAP HANA 2. The consumption of Intel's NVM technology is planned to be optimized for HANA 2 SPS03.

Jens Gleichmann

Hi Daniel,
currently there is no way to set up a table placement rule defining which tables remain in DRAM and which are placed in NVRAM. Frequently used tables must stay in high-speed DRAM, otherwise you will lose performance. So how and when will this be implemented in the HANA core? Without this capability the feature is not usable – you will just get a high number of new OSS messages about performance issues.
Another issue is the sizing rules, such as the core-to-memory ratio, which will no longer fit the new configurations of more memory with fewer cores. Currently most HANA systems are idling under these sizing rules, but how and when will this be changed? In the end, these rules have to be adjusted for NVRAM.

      Same questions in my blog 'NVM – HANA game changer?' =>


Martina Bahrke

      Thanks for your comment, Jens. As Daniel mentions in his blog, we are only at the beginning of our efforts to adopt this new technology. As with every innovation, this evolves by the minute, so make sure to stay tuned for further details and updates. Thanks again for your feedback!

Andreas Schuster

      Hey everyone,

A new blog describing more technical details just went online this morning.

      Best regards,