Understanding the SAP Data Lifecycle Manager (DLM) Tool

By Axel Meier

Coping with the explosive growth of business data continues to overwhelm IT departments around the globe that maintain large-scale data processing environments. The struggle to satisfy user requirements for data accessibility and performance with more storage and processing power presents an enormous challenge, one that is generally hampered by budget constraints.

 

Luckily, organizations that count on SAP HANA for their data processing needs can turn to the SAP HANA Data Warehousing Foundation, which includes the SAP Data Lifecycle Manager tool. It allows data to be moved, based on its operational usefulness, performance requirements, and access frequency, to a storage and processing tier with the cost and performance characteristics best suited for that data.

 

Data tiering options for SAP HANA have recently been covered by some of my colleagues. There is a great overview of SAP hot, warm, and cold data tiering, along with deeper dives on warm data tiering with Dynamic Tiering and Extension Nodes, and a detailed look at cold data tiering. Today's post, together with a second, more in-depth technical post, focuses on the SAP Data Lifecycle Manager and how it is used to optimize the memory footprint in SAP HANA.

Data Lifecycle Manager (DLM) is a web-based data management tool that enables the SAP HANA data tiering process—relocating aged or less frequently used data from SAP HANA tables—for native SAP HANA applications and SQL Data Warehouse applications. DLM runs in common web browser environments and supports numerous data storage destinations. It is available for both HANA XS-Classic and XS-Advanced Application Server stacks.

 

To relocate hot or warm data from SAP HANA to a specific cold data store for performance improvement, a DLM profile is needed that specifies the source and target data stores and the relocation rules. Data relocation can run from hot to cold, from cold back to hot, or bi-directionally to enable two-way movement.
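To make the rule concept concrete, here is a minimal sketch of what a relocation rule amounts to at the SQL level. The SALES_ORDERS source table, SALES_ORDERS_COLD target, and ORDER_DATE aging column are hypothetical; DLM models the rule in its UI and generates and executes the actual statements itself.

    -- Illustrative only: a relocation rule is conceptually a filter predicate on
    -- the hot source table. Matching rows are copied to the cold target and then
    -- removed from the source; DLM performs this consistently on our behalf.
    INSERT INTO "MYSCHEMA"."SALES_ORDERS_COLD"
        SELECT * FROM "MYSCHEMA"."SALES_ORDERS"
         WHERE "ORDER_DATE" < ADD_DAYS(CURRENT_DATE, -730);  -- older than ~2 years

    DELETE FROM "MYSCHEMA"."SALES_ORDERS"
     WHERE "ORDER_DATE" < ADD_DAYS(CURRENT_DATE, -730);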

 

A DLM Scheduler, available for both Application Server stacks, provides immediate execution and/or scheduling of a DLM profile, that is, the execution of the data movement based on the specified rule. This unattended, automated execution of a DLM profile supports better query performance by keeping the relevant data in a single storage location.
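Under the hood, an immediate or scheduled run boils down to executing the stored procedure that DLM generates from the profile (listed among the artifacts below). As a hedged illustration, with a purely hypothetical procedure name:

    -- Illustrative only: the real procedure name is derived from the DLM profile.
    -- An ad hoc run of the profile is conceptually just a procedure call:
    CALL "MYSCHEMA"."DLM_PROFILE_SALES_ORDERS_MOVE"();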

 

DLM also supports multi-tiering scenarios by combining two DLM profiles. The first DLM profile manages a single partitioned table (a multi-store or regular column store table) by moving table partitions to a Dynamic Tiering node or an Extension Node for warm data management. The second DLM profile then manages the movement of the table partitions (and data) located on the primary slave node and/or the Dynamic Tiering or Extension Node (hot and warm storage) to a cold storage destination.
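At the database level, warm-tier relocation of this kind works on whole partitions. The following is a rough sketch, assuming a hypothetical SALES_ORDERS table range-partitioned by ORDER_DATE in a scale-out landscape; the Extension Node host and port are placeholders, and DLM generates and executes the equivalent statements itself.

    -- Hypothetical range-partitioned column table; names and ranges are illustrative.
    CREATE COLUMN TABLE "MYSCHEMA"."SALES_ORDERS" (
        "ORDER_ID"   BIGINT        NOT NULL,
        "ORDER_DATE" DATE          NOT NULL,
        "AMOUNT"     DECIMAL(15,2)
    )
    PARTITION BY RANGE ("ORDER_DATE")
        (PARTITION '2015-01-01' <= VALUES < '2020-01-01',  -- older data
         PARTITION OTHERS);                                 -- current data

    -- Relocate the partition holding older data to an Extension Node.
    -- 'hanaext01:30003' is a placeholder for the worker node hosting the warm tier.
    ALTER TABLE "MYSCHEMA"."SALES_ORDERS" MOVE PARTITION 1 TO 'hanaext01:30003';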

 

The supported storage destinations depend on the Application Server stack in use. DLM XS-Classic supports the following storage destinations, as illustrated below: Multi-Store Table, Extension Node, Extended Table (Dynamic Tiering), SAP IQ, and SAP Spark Controller (Hadoop).

Fig 1: DLM XS-Classic Storage Destinations

 

 

DLM XS-Advanced supports these storage destinations, as illustrated below: Extension Node, SAP Spark Controller (Hadoop), and SAP Data Hub Cold Data Tiering (SAP Vora disk table).

Fig 2: DLM XS-Advanced Storage Destinations

 

Integrating easily into existing HANA-centric data models, DLM hosts a complete set of design-time and run-time database artifacts that are generated and activated by the tool, eliminating the need to build data management and data access database artifacts manually. These artifacts, explained in more detail in the follow-up post, include:

  • DLM Data Movement Rule (compiles into a HANA stored procedure)
  • DLM HANA (column store) source table
  • DLM data target table or structure for moving data consistently from a set of connected tables
  • DLM Modeled Persistence Object (MPO)
  • DLM Generated Views (Database Union-All View [GVIEW] and HANA Calculation Scenario or DLM Pruning View [PVIEW]) for access to distributed data sets (sketched below)
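As a hedged illustration of the generated union-all view, assuming the hypothetical SALES_ORDERS and SALES_ORDERS_COLD tables from the earlier sketch: DLM generates the real view, and the pruning view additionally encodes the rule predicate so that queries only touch the tier that can contain matching rows.

    -- Illustrative sketch of a GVIEW: one logical entry point over both tiers,
    -- so existing consumers keep querying a single object after relocation.
    CREATE VIEW "MYSCHEMA"."SALES_ORDERS_ALL" AS
        SELECT * FROM "MYSCHEMA"."SALES_ORDERS"        -- hot data in SAP HANA
        UNION ALL
        SELECT * FROM "MYSCHEMA"."SALES_ORDERS_COLD";  -- relocated cold data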

 

Better Performance and Lower TCO

Data Lifecycle Manager allows organizations to define a data temperature tiering strategy that optimizes data processing performance by displacing data from SAP HANA persistence to other, lower-TCO storage destinations. For SAP HANA native use cases, DLM provides a tool-based approach to model aging rules on tables and relocate aged data, optimizing the memory footprint of data in SAP HANA.

 

Be sure to check out my in-depth look at DLM artifacts and data relocation use cases, and please feel free to send any questions or comments my way.

 

      5 Comments
      Pierre Cassano

      Hi Axel,

      Great blog, thanks for sharing these insights. Questions for you...

      1. Why is Dynamic Tiering offered for XS Classic, but not for XS Advanced?
      2. Are there any plans to offer DT for XSA?

      Cheers... Pierre

      Axel Meier
      Blog Post Author

      Hi Pierre,

      thanks for your comment.

      The DLM (XSA) product version available today covers the HANA on-premises and SCP-compatible storage destinations, where Dynamic Tiering (DT) is only available for HANA on-premises. DLM (XSA) is embedded in SAP Web IDE (full-stack) to enable data tiering either directly within an existing project or as a separate project, and it follows the HDI design-time artifact paradigm.

      The DLM (XSC) product version is limited to HANA on-premises, where we're supporting all available storage destinations.

       

      Thanks,

      -Axel

      Arun Sitaraman

      Hi Axel,

      Thanks for the nice post. For my own edification, do HANA-based SAP applications leverage/consume DLM for these aging and NSE features?

      Best,

      Arun

      Kristin Knight

      Hi Axel,

      We’re trying to set up a POC of DVM.

      We are wondering if we can use DVM on HANA Enterprise, version:

      1.00.122.13.1507793622 (fa/hana1sp12)

      In addition to installing the new delivery unit, it seems like we also need to enable DVM in Solution Manager. Is that required? Can we skip Solution Manager for our POC?

      We simply want to show how we can set up the aging rules and move data over to Hadoop.

      Thanks.

       

      Rajendra Chandrasekhar

      Hi Axel,

      Do we need to have a table in Hive for DLM to push data from HANA to Hadoop via Spark Controller?