
SAP HANA Dynamic Tiering setup on Multi-tenant database with DWF – part 2

In the first part of this document I explained how to install and set up HANA MDC with dynamic tiering, including the deployment of DWF on the tenant database. In this second part I will explain how to configure DWF (the DLM part) to create external storage destinations and move tables from HANA to the external storage.

Create external storage

With DWF installed I am now able to move tables to external destinations, but before doing so I need to create the destinations in DLM.

Note: When creating a storage destination, DLM provides a default schema for the generated objects; this schema can be overwritten.

Dynamic Tiering

/wp-content/uploads/2016/05/53_948780.jpg

IQ 16.0

Note: the parameters used here must match the SDA connection information.

/wp-content/uploads/2016/05/54_948781.jpg
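For reference, the destination parameters must mirror the SDA remote source created for IQ. A minimal sketch of such a remote source, with hypothetical names for the source, host, port, driver and credentials:

-- Hypothetical SDA remote source for IQ 16; source name, driver path,
-- host, port and credentials must match your own landscape
CREATE REMOTE SOURCE "IQ_LAB" ADAPTER "iqodbc"
CONFIGURATION 'Driver=libdbodbc16.so;ServerNode=iqhost:2638'
WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=dba;password=secret';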

SPARK

Note: for Spark, the schema of the source persistence object is used for the generated objects.

Before creating the remote destination I have to tell the index server that my Spark connection will be used for data aging.

I run the following SQL statements from the studio:

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
SET ('data_aging', 'spark_remote_source') = 'SPARK_LAB' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'SYSTEM')
SET ('data_aging', 'spark_remote_source') = 'SPARK_LAB' WITH RECONFIGURE;
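To verify that the parameter has been applied on both services, the ini file contents can be queried:

-- Check the data_aging section across all configuration files
SELECT * FROM M_INIFILE_CONTENTS WHERE SECTION = 'data_aging';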

/wp-content/uploads/2016/05/55_948788.jpg

Also, on the Spark Controller side, the hanaes-site.xml file needs to be edited in order to set up the extended storage.

/wp-content/uploads/2016/05/55_1_948789.jpg

/wp-content/uploads/2016/05/56_948790.jpg
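I won't reproduce the exact entry here; as a rough sketch, additions to hanaes-site.xml follow the standard Hadoop property format (the property name and value below are placeholders, the real ones come from the DWF/Spark Controller guide for your release):

<!-- placeholder entry: substitute the property name and value documented
     in the DWF / Spark Controller guide for enabling extended storage -->
<property>
  <name>PROPERTY_NAME_FROM_GUIDE</name>
  <value>PROPERTY_VALUE_FROM_GUIDE</value>
</property>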

My three external storages are now created, but as we can see they are inactive; to activate them, hit "Activate".

/wp-content/uploads/2016/05/56_1_948793.png

Once activated

/wp-content/uploads/2016/05/57_948792.jpg

Move tables to external storage

With my external storages added to DLM, in order to move tables into them I need a lifecycle profile for each of them.

/wp-content/uploads/2016/05/58_948797.jpg

The profile allows me to specify whether I want to move a group of tables or only a specific table, and how I want to move them (trigger-based or manual).

Note: When using SAP IQ as the storage destination type, you need to manually create the target tables in IQ (use the help menu to generate the DDL).
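As a hypothetical illustration (in practice, generate the DDL from the help menu so it matches the HANA source exactly), the target table for a source table CRIME could be created on the IQ side like this:

-- Hypothetical IQ target table; schema, table and column definitions
-- are assumptions and should come from the generated DDL
CREATE TABLE "WRUTER"."CRIME" (
    "ID"     INTEGER NOT NULL,
    "YEAR"   INTEGER,
    "REGION" VARCHAR(100)
);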

/wp-content/uploads/2016/05/59_1_948798.jpg

/wp-content/uploads/2016/05/59_948799.jpg

From the destination attributes option you can specify the reallocation direction of the table transfer and the packet size to be transferred:

Note: Spark doesn't support packet-wise transfer.

/wp-content/uploads/2016/05/60_948800.jpg

Depending on the option chosen above, a clash strategy can be defined in order to handle unique key constraint violations.

/wp-content/uploads/2016/05/61_948801.jpg

Note: Spark doesn't support clash strategies. This means that unique key constraint violations are ignored and records with a unique key might be relocated multiple times, which can result in incorrect data in the storage.
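Because of this, it can be worth checking the Spark-side target for duplicates after a run. A simple sketch, assuming a relocated table SALES with key column ID (both names are assumptions):

-- Hypothetical duplicate check: any row returned is a record that was
-- relocated more than once
SELECT "ID", COUNT(*) AS CNT
FROM "WRUTER"."SALES"
GROUP BY "ID"
HAVING COUNT(*) > 1;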

Once the destination attributes are defined, you will need to set up the reallocation rule, which identifies the relevant records in the source persistence to be relocated to the target persistence.

/wp-content/uploads/2016/05/61_1_948802.jpg
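A reallocation rule is essentially a filter condition on the source table. As a hypothetical example, assuming a CRIME table with a YEAR column and a rule equivalent to YEAR <= 2013, a simple count shows how many records the rule would pick up:

-- Hypothetical dry run; schema, table and column names are assumptions
SELECT COUNT(*) FROM "WRUTER"."CRIME" WHERE "YEAR" <= 2013;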

When satisfied, save and activate your configuration, and optionally run a simulation to test it.

/wp-content/uploads/2016/05/62_948805.jpg

When the configuration is saved and activated for IQ and DT, the generated objects (i.e. the generated procedures) are created.

/wp-content/uploads/2016/05/62_1_948809.jpg
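The generated procedures can also be confirmed from the catalog; DLM_TARGET below stands in for whatever destination schema was chosen:

-- List the procedures DLM generated in the (assumed) destination schema
SELECT SCHEMA_NAME, PROCEDURE_NAME
FROM SYS.PROCEDURES
WHERE SCHEMA_NAME = 'DLM_TARGET';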

For the purpose of this document I'll trigger all my data movements manually.

/wp-content/uploads/2016/05/63_948810.jpg

/wp-content/uploads/2016/05/64_948811.jpg

When the triggered job runs, the record counts should match the criteria defined in the reallocation rule. For each external destination the log can be checked.

/wp-content/uploads/2016/05/65_948699.jpg
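A quick, hypothetical way to cross-check the counts is to compare what remains in the HANA source with what landed in the target (schema and table names are assumptions):

-- Rows left in HANA plus rows in the target should add up to the
-- original row count of the table
SELECT 'HANA SOURCE' AS LOCATION, COUNT(*) AS CNT FROM "WRUTER"."CRIME"
UNION ALL
SELECT 'IQ TARGET', COUNT(*) FROM "DLM_TARGET"."CRIME";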

Query tables from the external source

In order to query the data from the external storage now that the tables have been moved, I first need to check the generated objects in the destination schema.

I can see the two tables that were moved: one in dynamic tiering ("Insurance") and the other one as a virtual table for IQ ("Crime").

/wp-content/uploads/2016/05/66_948700.jpg
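Since Crime is exposed as a virtual table, it can be queried like any local table while the data actually resides in IQ (the schema name is an assumption):

-- Hypothetical query against the virtual table; execution is federated
-- to IQ through the SDA connection
SELECT TOP 10 * FROM "DLM_TARGET"."CRIME";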

One additional table, "PRUNING", shows the scenario and the criteria defined in the rule editor for the table.

/wp-content/uploads/2016/05/67_948818.jpg
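The pruning information itself can be inspected directly (the destination schema name is an assumption):

-- Show the scenario and criteria recorded for the relocated tables
SELECT * FROM "DLM_TARGET"."PRUNING";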

For Spark, the schema of the source persistence object is used for the generated objects.

/wp-content/uploads/2016/05/68_948819.jpg

My configuration of dynamic tiering with DLM on a HANA multi-tenant database is now complete.


      3 Comments
      Former Member

      Nice!
      What about the new Dynamic Tiering with BW4HANA? Does the scenario described above also apply?
      Thanks

      Williams Ruter (Blog Post Author)

      Hello Glen,
      Yes, it does apply too. Are you looking for something specific from a BW4H standpoint?
      Williams

      Former Member

      no, i'm just trying to clarify DT in terms of B4H. the online course says that DT is supported, but then they used the term extension nodes, whereas DT i thought was extended store(or node).