
SAP HANA Data Warehousing Foundation – Data Distribution Optimizer (DDO)

Recently I had an opportunity to work with the product Data Distribution Optimizer (DDO), and I would like to share the experience with you all.

Before getting into the details of DDO, let's try to understand the problem it addresses and when we need it.

Let's assume a use case:

We have one HANA box and we are running multiple applications on top of it, as shown in the picture below.

[Image: multiple applications on one scale-out SAP HANA system]

Each application has its own schemas and tables. When we run multiple applications on a scaled-out system (a multi-node system), a lot depends on how the user chooses to manage it, because with many applications on a big HANA box we face two problems from an administration and performance point of view:

  • A large number of tables
  • A number of very large tables

Problem 1: How to handle a large number of tables

1) First, let's assume that the user is not managing anything, and each application has tables as shown below:

  • BW has 8 Tables
  • HANA Live has 3 Tables
  • Custom Application has 3 Tables
  • Custom Data Mart has 3 Tables

And HANA decides to distribute the tables equally across the nodes to spread the memory usage evenly.

[Image: tables distributed evenly across the nodes]

So now:

[Image: number of tables per node]

  • Node1 has 3 Tables
  • Node2 has 3 Tables
  • Node3 has 3 Tables
  • Node4 has 3 Tables
  • Node5 has 3 Tables
  • Node6 has 2 Tables

How does this look? The tables are equally distributed across the nodes, so the load is evenly distributed; that's good. But what happens when I run a query on BW? Almost all the nodes get accessed, and this may bring your query performance down.
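You can check this effect yourself by looking at where each application's tables actually sit. The query below is only a sketch: 'SAPBW' is a hypothetical schema name, and it uses the monitoring view M_CS_TABLES, which reports host, memory size and row count per column-store table (and per partition).

    -- Which node hosts which BW table, and how much memory does it use?
    -- 'SAPBW' is a placeholder; use your application's schema name.
    SELECT host,
           port,
           schema_name,
           table_name,
           ROUND(memory_size_in_total / 1024 / 1024) AS size_mb,
           record_count
    FROM   m_cs_tables
    WHERE  schema_name = 'SAPBW'
    ORDER  BY host, size_mb DESC;

If the result shows the BW tables spread over all six hosts, a typical BW query joining several of them will fan out across the whole landscape.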

2) When we have a bigger HANA box and run multiple applications on top of it, it is better if the user controls the table location.

What do I mean by that? The user can reserve nodes for each application, for example:

  • For the BW application: Nodes 1, 2, 3
  • For HANA Live: Node 4
  • For the Custom App: Node 5
  • For the Data Mart: Node 6

[Image: tables grouped by application and placed on reserved nodes]

  • Node1 has 3 Tables
  • Node2 has 3 Tables
  • Node3 has 2 Tables
  • Node4 has 3 Tables
  • Node5 has 3 Tables
  • Node6 has 3 Tables

How does this look? The tables are still equally distributed across the nodes, so the load stays evenly distributed; that's good. But now the tables are also grouped and placed by application. This ensures that when a query is fired from a specific application, fewer nodes are accessed, which in turn yields good performance.
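On SQL level, pinning a single table to a reserved node can be done with ALTER TABLE ... MOVE TO. This is only a minimal sketch; the schema, table, host and port are hypothetical, and the port has to be the indexserver port of the target node.

    -- Move one hypothetical BW table to the node reserved for BW.
    -- 'hananode1:30003' stands for <host>:<indexserver port> of that node.
    ALTER TABLE "SAPBW"."FACT_SALES" MOVE TO 'hananode1:30003';

Doing this table by table quickly becomes tedious on a big system, which is exactly the gap the options and the DDO tool described below try to close.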

Problem 2: How to handle large tables

1) If the user doesn't handle anything, then we have two problems:

  • A HANA table has a restriction of 2 billion rows
  • If a table is very big, accessing it may degrade query performance, and it may also occupy a large part of the memory.

2) The user handles the bigger table by partitioning it smartly (see the sketch after this list):

  • Each partition can hold up to 2 billion records
  • Only the necessary partitions are accessed, depending on the query, so you get better performance and better memory utilization.
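As a rough illustration of what such partitioning looks like on SQL level (the schema, table and column names and the partition counts are made-up examples):

    -- Option A: hash partitioning spreads rows evenly, e.g. into 6 parts (one per node).
    -- If the table has a primary key, the hash column must be part of that key.
    ALTER TABLE "SAPBW"."FACT_SALES"
      PARTITION BY HASH ("CUSTOMER_ID") PARTITIONS 6;

    -- Option B: range partitioning lets queries prune partitions, e.g. by year.
    ALTER TABLE "SAPBW"."FACT_SALES"
      PARTITION BY RANGE ("ORDER_YEAR")
      (PARTITION 2013 <= VALUES < 2014,
       PARTITION 2014 <= VALUES < 2015,
       PARTITION OTHERS);

Both statements repartition the same hypothetical table, so in practice you would pick one scheme; they are shown together only to contrast the two approaches.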

So now it's clear that when we have multiple applications, it's better to handle:

1) Table placement

2) Table partition

How to do that? You can do it in HANA studio via:

  • Table Distribution  –> One table at a time.
    • Advantage is more control over partitioning and table placement
    • Disadvantage is one table at a time

[Screenshot: Table Distribution view in SAP HANA studio]

  • Optimize Table Redistribution/Partitioning  –> running at system level
    • Advantage is running at system level for all tables
    • Disadvantage is less control

[Screenshot: Optimize Table Redistribution/Partitioning option in SAP HANA studio]

Now that we understand the use case and the existing options, let's get into our tool, DDO.

The DDO tool is an XS-based application developed to handle exactly this kind of administration activity.

The DDO tool helps you manage tables and partitions at system level, with more control, in a scale-out landscape.

[Image: DDO compared with the SAP HANA studio options]

Use Cases of DDO

  • SAP HANA scale-out system administration
  • Supports the HANA administrator in managing the distribution of tables
  • Optimizes the allocation of main memory within an SAP HANA scale-out landscape for:
    • Single applications powered by SAP HANA
      (standalone SAP BW instances)
    • SAP HANA mixed cases (multiple applications)
    • SAP ERP powered by SAP HANA (planned)
  • Improved SQL performance thanks to a specialized DDO algorithm that manages the table distribution within an SAP HANA scale-out landscape efficiently
  • Comprehensive monitoring and logging capabilities for scale-out landscapes
  • Complementary to SAP HANA multitenant database containers; optimization happens within one container


Functionalities

  • SAP HANA scale-out landscape overview with regard to setup and data distribution
  • Specify different configurations for reorganization on different levels (e.g. table, table group, schema, location group)
  • Create, adjust and simulate different SAP HANA Reorganization Plans to achieve an optimal data distribution
  • Export and import reorganization plans across system boundaries
  • Schedule SAP HANA reorganization runs
  • Analyze logs of executed SAP HANA landscape reorganization runs

If you would like to see how to use the tool and how it works, you can watch the video.


More links:


Hope this is helpful.

Thanks & Regards

  A.Dinesh


      5 Comments
      Prabhith Prabhakaran

      New Topic, New Concept, New Learning

        --> Thanks for sharing

      BR

      Prabhith

      Henrique Pinto

      It'd have been great if you had documented how you implemented DDO yourself, the challenges and tips you'd bring up for anyone attempting to do it on their own.

      Former Member (Blog Post Author)

      Hmm sure will try to post it...

      Tian Song

      Such a wonderful blog, thanks a lot Dinesh.

      Former Member

      Nice doc