This blog provides additional details about the new concept in HANA for managing “warm“ data in BW. It follows up on my earlier blog, where I initially introduced the idea: Update – Data LifeCycle Management for BW-on-HANA

What are deployment options for HANA Extension Nodes?

There are basically three different deployment options for extension nodes in a HANA system for BW. Which option you choose depends on your landscape, the sizing of the “warm” data volume in your system, the BW release, the HW partner, … and, of course, the timeline.

Why does it work for BW?

The standard HANA sizing guidelines allow for a data footprint of 50% of the available RAM. This ensures that all data can be kept in RAM at all times and that there is sufficient space for intermediate result sets. These sizing guidelines can be significantly relaxed on the Extension Group (see the small calculation after the following list), since “warm” data is accessed

  • less frequently,
  • with reduced performance SLAs,
  • with less CPU-intensive processes,
  • only partially at the same time.
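
To make the sizing difference concrete, here is a small, purely illustrative calculation in Python. The 50% rule for “hot” nodes comes from the standard sizing guideline quoted above; the relaxation factor used for the extension node is a hypothetical example value, not an official SAP sizing recommendation.

```python
# Illustrative sizing sketch -- not an official SAP sizing formula.
# Hot nodes: standard guideline of data footprint <= 50% of RAM.
# Extension node: hypothetical relaxed factor (assumed here), since warm
# data is only partially loaded into memory at any point in time.

def max_data_footprint_gb(ram_gb: float, footprint_factor: float) -> float:
    """Return the maximum data footprint allowed for a node."""
    return ram_gb * footprint_factor

hot_node_ram_gb = 2048   # example: 2 TB worker node
ext_node_ram_gb = 2048   # example: 2 TB extension node

hot_capacity = max_data_footprint_gb(hot_node_ram_gb, 0.5)   # 50% guideline
warm_capacity = max_data_footprint_gb(ext_node_ram_gb, 2.0)  # assumed relaxed factor

print(f"hot data per node:  {hot_capacity:.0f} GB")
print(f"warm data per node: {warm_capacity:.0f} GB")
```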

The BW application controls and understands the access patterns to BW tables and derives appropriate partitioning and table distribution for “warm” tables. This way, BW ensures that a “warm” table is not loaded into memory completely, but only partially, thanks to efficient partition pruning. Loading the much smaller table partitions into memory is not critical in the usual BW operations (batch load processes).
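
To illustrate what partition pruning means for a “warm” table, here is a minimal Python sketch. The request-based partitioning scheme, the partition boundaries, and the query are simplified assumptions for illustration only; the real partitioning is derived by BW per object type.

```python
# Minimal sketch of request-based partition pruning on a "warm" table.
# Partition layout and the extraction request range are illustrative only.

from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    min_request: int   # lowest load request id stored in this partition
    max_request: int   # highest load request id stored in this partition

partitions = [
    Partition("P1", 1, 1000),
    Partition("P2", 1001, 2000),
    Partition("P3", 2001, 3000),   # latest partition, receives new loads
]

def prune(parts, request_from, request_to):
    """Return only the partitions that can contain the requested range."""
    return [p for p in parts
            if p.max_request >= request_from and p.min_request <= request_to]

# A delta extraction touching only the newest requests loads one small
# partition into memory instead of the whole table.
print([p.name for p in prune(partitions, 2500, 3000)])   # -> ['P3']
```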

Based on the modelling object type, BW can automatically provide a very good default for the “warm” setting.

  • Up to 50% of BW’s data can be classified as “warm” (experience from the “non-active” data concept)
  • Access to “warm” tables is partition-based in >95% of all cases (write (= merge) and read)
  • Data in “warm” tables is part of batch processes in most cases (load to memory is not critical)
  • Query access to “warm” data will be significantly slower – this must be accepted as part of the deal

How to classify BW objects as “warm”?

The classification of a BW object as “warm” is part of the modeling task in the corresponding modeling UI. The default for all objects is “hot”.

  • A newly created object classified as “warm” has all its database tables created on the “extension” node(s).
  • An object containing data does not change the location of its tables immediately during object activation, but only changes the metadata of the object. To move the tables, there are two alternatives (a sketch for verifying the resulting table locations follows below):
    • Execute a table redistribution using the SAP DWF DataDistributionOptimizer (DDO) – this can be seen as a regular housekeeping action,
    • Use transaction RSHDBMON to move single tables/partitions manually.
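
After a redistribution or a manual move, it can be useful to check on which host a table now resides. The following Python sketch does this with the hdbcli driver against the SYS.M_TABLE_LOCATIONS monitoring view of a scale-out system; the connection parameters and the table-name pattern are placeholders to adapt, and the view’s exact columns may differ by HANA revision.

```python
# Sketch: list the hosts on which the partitions of BW tables are located.
# Connection parameters and the name pattern are placeholders; the
# SYS.M_TABLE_LOCATIONS view and its columns are assumed as available in
# HANA scale-out systems and may differ by revision.

from hdbcli import dbapi   # SAP HANA Python client

conn = dbapi.connect(address="hana-host", port=30015,
                     user="MONITORING_USER", password="***")
try:
    cur = conn.cursor()
    cur.execute(
        """SELECT HOST, SCHEMA_NAME, TABLE_NAME, PART_ID
             FROM SYS.M_TABLE_LOCATIONS
            WHERE TABLE_NAME LIKE '/BIC/A%'   -- placeholder pattern for aDSO tables
            ORDER BY TABLE_NAME, PART_ID"""
    )
    for host, schema, table, part_id in cur.fetchall():
        print(f"{schema}.{table} part {part_id} -> {host}")
finally:
    conn.close()
```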

What type of objects can be classified as “warm” in BW?

This section describes which BW objects can be classified as “warm” and in which BW release the option is available. It does not mean that all of these objects should necessarily be classified as “warm” – that depends on the individual use case.

InfoCubes
  • Available release: not available
  • Comment: Please look at the options for advanced DSOs.

Classic DSOs (exception see below)
  • Available release: not available
  • Comment: Please look at the options for advanced DSOs.

DataSources/PSA tables
  • Available release: BW7.4 SP10
  • Comment: A PSA table can be classified as “warm”. PSA tables are partitioned, grouping together one or more load requests. Load operations only change the latest partition --> small amount of data for the MERGE process. Extract operations only use the latest partition in most cases (delta loads).

Write-optimized DSOs
  • Available release: BW7.4 SP10
  • Comment: See PSA comment.
  • Caution: Only write-optimized DSOs with usage type Corporate Memory should be classified as “warm”, i.e. no reporting access, no heavy look-up usage.

Advanced DSOs w/o Activation
  • Available release: BW7.4 SP10 & BW/4HANA
  • Comment: Partitioning and access similar to PSA.
  • Caution: See write-optimized DSOs.

Advanced DSOs w/ Activation
  • Available release: BW7.5 SP01 & BW/4HANA
  • Comment: Load and extract patterns are request/partition-based – similar to PSA tables.
  • Caution: DSO activation needs to load and process the complete table in memory --> only aDSOs with very infrequent load activity should be classified as “warm”; use RANGE partitioning of the aDSO where possible to allow pruning.

Advanced DSOs with reporting access
  • Available release: BW7.5 SP01 & BW/4HANA
  • Comment: Load patterns are request/partition-based – similar to PSA tables.
  • Caution: Query read access may load the complete table (all requested attributes/fields) to memory and query processing may be very CPU-intensive. Only classify objects with
    1. very infrequent reporting access,
    2. highly selective access (few fields, selective filters hitting the RANGE partition criteria if available – see the sketch below),
    3. relaxed performance expectations due to load to memory & less CPU.

RANGE partitions of Advanced DSOs
  • Available release: BW/4HANA
  • Comment: Selected RANGE partitions of aDSOs can be classified as “warm”. Load and read patterns are request/partition-based – similar to PSA tables.
  • Caution: DSO activation does partition pruning and loads and processes the complete partitions to memory --> only aDSO partitions with very infrequent load activity should be classified as “warm”.
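
To make the “highly selective access” point above concrete, here is a hedged Python sketch of two hypothetical reporting queries against the active-data table of an aDSO that is assumed to be RANGE-partitioned by calendar month. The schema, table, and column names are illustrative only; a filter on the partitioning column lets HANA prune to a single partition, while a query without that filter may load all partitions of the warm table.

```python
# Sketch: selective vs. non-selective query against a hypothetical
# RANGE-partitioned "warm" aDSO active-data table. All names are examples.

from hdbcli import dbapi

conn = dbapi.connect(address="hana-host", port=30015,
                     user="REPORTING_USER", password="***")
cur = conn.cursor()

# Selective query: the filter on CALMONTH matches the assumed RANGE
# partitioning criterion, so only the partition for 2016-01 is needed.
cur.execute(
    """SELECT MATERIAL, SUM(AMOUNT)
         FROM "SAPBW"."/BIC/ASALES2"      -- hypothetical aDSO table
        WHERE CALMONTH = '201601'         -- hits the RANGE criterion
        GROUP BY MATERIAL"""
)
print(cur.fetchall())

# Non-selective query: no filter on the partitioning column, so all
# partitions of the warm table may be loaded into memory -- exactly the
# access pattern that should stay rare for "warm" objects.
cur.execute(
    """SELECT MATERIAL, SUM(AMOUNT)
         FROM "SAPBW"."/BIC/ASALES2"
        GROUP BY MATERIAL"""
)
print(cur.fetchall())
conn.close()
```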

What is the impact for the HANA system?

A HANA system with extension node(s) first of all looks and behaves like a standard HANA scale-out system. All operations, features, and functions work as before (system replication, …).

However, there are a few things that should be considered:

  • The HANA system can now store more data, which has an impact on backup and recovery times. In particular, the higher data volume on the extension node(s) may now dominate backup and recovery times – this depends on the hardware of the HANA system.
  • Forced unloads are now very common on the extension node(s). On the “hot” nodes, many unloads are a sign of insufficient sizing – a sketch for monitoring unloads per host follows after this list.
  • In option 3 and – possibly depending on the choice of hardware – also in option 2, the setup of High Availability using host auto-failover may need to be adjusted. If no dedicated standby for the extension node exists, it may be necessary to explicitly fall back to the original configuration as soon as a failing node is brought online again.
  • For non-BW data, the “warm” classification with relocation to the extension node(s) is not supported. If non-BW data is stored in the same HANA DB, it has to be located on the classic nodes.
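
Since unloads are expected on the extension node(s) but suspicious on the “hot” nodes, it can be useful to compare unload counts per host. The sketch below assumes the SYS.M_CS_UNLOADS monitoring view for column-store unloads; the view’s columns and the connection parameters are assumptions to adapt to your system and HANA revision.

```python
# Sketch: count column-store unloads per host to see whether unloads are
# concentrated on the extension node(s). The M_CS_UNLOADS view and its
# HOST/REASON columns are assumptions; adapt to your HANA revision.

from hdbcli import dbapi

conn = dbapi.connect(address="hana-host", port=30015,
                     user="MONITORING_USER", password="***")
cur = conn.cursor()
cur.execute(
    """SELECT HOST, REASON, COUNT(*) AS UNLOADS
         FROM SYS.M_CS_UNLOADS
        GROUP BY HOST, REASON
        ORDER BY HOST, UNLOADS DESC"""
)
for host, reason, unloads in cur.fetchall():
    print(f"{host:20s} {reason:15s} {unloads}")
conn.close()
```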

When will the new concept be available?

Options 1 and 2 have been generally available since the Datacenter Service Point (DSP) of HANA SP12.

Offerings for option 3 are still under discussion.
