Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
tobias_koebler
Advisor

Hi,

this blog describes how you can enable parallelized replication with SLT - enjoy :wink:

Important: This applies to releases before DMIS 2011 SP6

Introduction

Standard SLT replication is single-threaded and may share a work process with other replicating tables. This model works very well for most customer tables. However, it is common for customers to have very large and very active tables that may be changed (insert, update, delete) millions of times per hour, and all of those changes must be replicated to HANA. A single-threaded, shared replication model that cannot efficiently process entries in the trigger log can be modified to use a balanced multi-threaded model using the process defined in this document.

Understanding the Table Data

The first step in the process is to understand how to ‘split’ the data into balanced threads. Input from the business owner is very useful at this stage. The key objective is to separate the data into a manageable number of threads of roughly equal load. The tables are relational in nature, so we must understand which key fields have the most distinct values and, within those fields, how we can best split the data into equal portions.

We will use BSEG as an example as this is the most common table to require this sort of configuration.

  • MANDT (Client) – there is only one relevant value (for the productive client)
  • BUKRS (Company Code) – there are hundreds of values, of which L001 and 201 are very large.  Other ranges of values contain large groupings of data.
  • BELNR (Document Number) – should a single company code such as L001 take significantly longer to replicate than all other company codes or groupings of company codes, we can further subdivide that company code by document number ranges

You should understand your data well enough to balance the replication load as evenly as possible.
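As a rough illustration of the balancing idea (plain Python, not part of SLT; the change volumes per company code are invented), a greedy split that always assigns the heaviest remaining key value to the lightest bucket:

```python
# A minimal sketch: grouping company codes into a fixed number of
# buckets of roughly equal estimated change volume.
def balance_buckets(value_counts, n_buckets):
    """value_counts: dict mapping key value -> estimated change volume."""
    buckets = [{"values": [], "load": 0} for _ in range(n_buckets)]
    # Assign the heaviest values first, always to the lightest bucket.
    for value, load in sorted(value_counts.items(), key=lambda kv: -kv[1]):
        target = min(buckets, key=lambda b: b["load"])
        target["values"].append(value)
        target["load"] += load
    return buckets

# Hypothetical change volumes per company code (BUKRS):
counts = {"L001": 900, "0201": 850, "1000": 300, "2000": 250, "3000": 200}
buckets = balance_buckets(counts, 3)
```

In practice the "load" estimates come from the business owner and from monitoring; the point is only that a few dominant values usually deserve their own threads while the long tail can share one.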

SLT Configuration Requirements

Parallel replication uses the SLT load processes that are allocated via the SLT Web Dynpro (transaction LTR) in the Initial Load field. So if all of your parallel processes are to run at the same time (and hopefully they will), you will need one Initial Load process per thread, each of which maps to one DTL_MT_DATA_LOAD* batch job. In the example below, we have 7 unique access plans defined for the replication of table BSEG (RFBLG). This means that we’ll need at least 7 Initial Load processes defined.

With this in mind, it is a good practice to monitor replication duration and rebalance the select criteria and/or process count.  Maybe you can use fewer threads.  Maybe you will need different data ranges.  Constant monitoring should help to develop the best load balancing model.

The monitoring process will be described later in this document.

How to Configure Multi-Threaded SLT Replication

Assumption: The initial load has completed successfully, and replication has started

1.) Stop the SLT load jobs via the LTR transaction by setting both Total Jobs and Initial Load to 0.  Ignore the error message.  Then, via SM37, halt the monitor job.  Alternatively, halt the jobs using the process defined in the SAP LT for SAP HANA Technical Operations Manual.

2.) Review and record the value from field COBJ_GUID in table DMC_MT_TABLES for table RFBLG.

3.) Create a new entry in table DMC_COLL_ACP_HDR

  • Owner:  The same as the contents of CONV_GUID from DMC_MT_TABLES for table RFBLG.
  • ID: 1
  • GUID:  Whatever you like.  It is completely up to you.
  • DESCR:  can be left blank
  • COLL_ACP_STATE: 4
  • TS CREATED: 0
  • TS CHANGED: 0
  • BLOCKSIZE: 8,000,000

4.) Create entries in table DMC_ACS_PLAN_HDR.  The number of entries will depend on the number of ranges (selection criteria) that you want to define.

  • Owner:  GUID from DMC_COLL_ACP_HDR
  • CONVOBJ:  Leave blank
  • ID: starting from 00001 is ok.  I used sequential numbers in my screenshot, but it is not necessary.
  • DESCR:  Optional, but it might be helpful to describe the associated select criteria defined in DMC_ACS_PLAN_STG.
  • GUID:  Unique value defined by you that links the DMC_ACS_PLAN_HDR entry to its DMC_ACS_PLAN_STG entries.
  • IS_CALCULATED :  ‘X’
  • TS_CREATED : 0
  • TS_CHANGED: 0
  • BLOCKSIZE: 8,000,000
  • PREC_STATE: Leave blank
  • FAILED: ‘-‘
  • IN_PROCESS: ‘-‘
  • LOADED: ‘-‘
  • TEMPLATE: Leave blank
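Before entering the values manually, it can help to lay the rows out programmatically. A minimal Python sketch (not SLT code; the owner GUID and the range count are placeholders) that builds one DMC_ACS_PLAN_HDR row per planned range with the field values listed above:

```python
import uuid

# Build the DMC_ACS_PLAN_HDR field values described above,
# one row per planned range. Owner GUID and count are assumptions.
def build_plan_headers(owner_guid, n_ranges, blocksize=8_000_000):
    rows = []
    for i in range(1, n_ranges + 1):
        rows.append({
            "OWNER": owner_guid,               # GUID from DMC_COLL_ACP_HDR
            "CONVOBJ": "",                     # leave blank
            "ID": f"{i:05d}",                  # sequential IDs, e.g. 00001
            "DESCR": "",                       # optional description
            "GUID": uuid.uuid4().hex.upper(),  # links HDR to its STG entries
            "IS_CALCULATED": "X",
            "TS_CREATED": 0,
            "TS_CHANGED": 0,
            "BLOCKSIZE": blocksize,
            "PREC_STATE": "",
            "FAILED": "-",
            "IN_PROCESS": "-",
            "LOADED": "-",
            "TEMPLATE": "",
        })
    return rows

headers = build_plan_headers("OWNER_GUID_FROM_COLL_ACP_HDR", 7)
```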

5.) DMC_ACS_PLAN_STG:  Define the selection criteria.

Example for BKPF (transparent table):

Note the naming convention for field “STRUCTURE”: if this is a transparent or pool table, enter the name of the logging table (/1CADMC/00000507, in this example) prefixed by ‘S_’.

On the other hand, if this is a cluster table, for “STRUCTURE”, you need to enter the name of the physical cluster table, for example, RFBLG, as in the screenshot below:

Adding the ‘AND’ can be tricky. Check the flow in the example. The rule is: linking multiple conditions for the same field requires an AND, for example BUKRS >= ‘A’ AND BUKRS <= ‘L000’. But when you proceed from one field to the next, you must not use AND.
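The AND rule can be expressed as a small sketch (plain Python, not SLT code; field names and values are taken from the example above):

```python
# Conditions on the same field are joined with AND; moving on to the
# next field starts a new line without AND.
def build_selection_lines(conditions):
    """conditions: list of (field, operator, value) tuples in entry order."""
    lines, prev_field = [], None
    for field, op, value in conditions:
        prefix = "AND " if field == prev_field else ""
        lines.append(f"{prefix}{field} {op} '{value}'")
        prev_field = field
    return lines

lines = build_selection_lines([
    ("BUKRS", ">=", "A"),
    ("BUKRS", "<=", "L000"),
    ("BELNR", ">=", "0000000001"),  # next field: no AND
])
# lines == ["BUKRS >= 'A'", "AND BUKRS <= 'L000'", "BELNR >= '0000000001'"]
```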

6.) Modify DMC_MT_TABLES

  • ACCESS PLAN TYPE:  CAP
  • TABLE MAX PARALL:  Set to at least the number of parallel replication processes

7.) Change the Total and Initial Load job count in the Webdynpro (transaction LTR) to the necessary values to manage all loads and replication.

8.) Start the monitor job.

A Different Approach

Now that you have read through this document, you may see that it makes much more sense to do things in the following order:

1- Create entries in DMC_ACS_PLAN_STG <- this can take a while.  Double-check your syntax!
2- Create entries for DMC_ACS_PLAN_HDR
3- Stop batch jobs
4- Create DMC_COLL_ACP_HDR entry
5- Modify DMC_MT_TABLES entry
6- Start batch jobs


You can complete steps 1 and 2 well ahead of time.

Monitoring Multi-Threaded Replication

Review your entries in DMC_ACS_PLAN_HDR to see which switch to LOADED quickly and which remain in status IN_PROCESS for a long time.  This will give you a clue as to where rebalancing or further subdivision may be necessary.

IMPORTANT:  All access plans for the table must switch to LOADED before replication can start again.  So if they are out of balance and one of the plans is replicating a lot more than the others, the replication data associated with the other plans will always be waiting for that one plan to complete.  Balance amongst the threads is critical to timely replication.
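A tiny illustration of this gating effect (invented durations, not SLT code): the replication cycle takes as long as the slowest plan, and every other plan sits idle for the difference.

```python
# Each access plan's processing time in minutes for one cycle (made up):
plan_durations_min = {"00001": 4, "00002": 5, "00003": 21, "00004": 6}

# Replication restarts only when every plan is LOADED, so the cycle time
# is determined by the slowest plan.
cycle_time = max(plan_durations_min.values())
idle_time = {p: cycle_time - d for p, d in plan_durations_min.items()}
```

Here plan 00003 dominates the cycle; in such a case its selection criteria are the first candidates for further subdivision.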

If you want to ‘push’ things along, you can always reset the status.  Change the value in the Loaded column from ‘X’ to ‘-‘ and replication will start for that plan.

If replication jobs are cancelled, the “IN_PROCESS” indicator will still show ‘X’ even though no job is actually working on this access plan any more. To reset the replication status:

1) Halt replication batch jobs via Webdynpro or SM37

2) MWBMON->Expert Functions->Reset Access Plan Load Statuses

Set values as indicated in the screenshot below:

Rebalancing Access Plan Delimitations

If you detect that the access plans are out of balance, you can adjust the records in table DMC_ACS_PLAN_STG accordingly, or possibly add more access plans (DMC_ACS_PLAN_HDR) to get a better load balance. However, you of course need to make sure that the delimitations completely cover the range of data but do not overlap. Also, while you are changing these settings, make sure that no replication is running in parallel! Otherwise, multiple jobs might process the same data records at once, which is not only unnecessary but also risks inconsistent replication.
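A simple sanity check for coverage and overlap can be sketched like this (plain Python, not an SLT tool; the bounds are illustrative):

```python
# Ranges are treated as inclusive on both ends, matching >=/<= conditions.
def check_delimitations(ranges, low, high):
    """ranges: list of (lo, hi) string bounds; low/high: full key domain."""
    ranges = sorted(ranges)
    problems = []
    if ranges[0][0] > low:
        problems.append(f"gap before {ranges[0][0]!r}")
    for (lo1, hi1), (lo2, hi2) in zip(ranges, ranges[1:]):
        if lo2 <= hi1:
            problems.append(f"overlap: {hi1!r} / {lo2!r}")
        # Note: gaps between adjacent string bounds are not detected here,
        # since computing a string "successor" is not well defined.
    if ranges[-1][1] < high:
        problems.append(f"gap after {ranges[-1][1]!r}")
    return problems

ok = check_delimitations([("A", "L000"), ("L001", "ZZZZ")], "A", "ZZZZ")
bad = check_delimitations([("A", "M000"), ("L001", "ZZZZ")], "A", "ZZZZ")
```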

Key Fields and Number Ranges

It is important to use the first key fields of the original table for the delimitation; otherwise, you will face a substantial degradation of replication performance. In the BSEG example above, we always specified one value for the client field (the first key field), and this is indispensable. If one company code made up the bulk of the data (and data changes), we could further subdivide using the BELNR (document number) field.

But be cautious with delimitations referring to a field that is controlled by a number range! For BSEG it might work, since in this case multiple number ranges are usually used in parallel. The worst case scenario is a table whose only key field (or first key field with multiple values, as opposed to the client) is controlled by a single number range. To parallelize, you would specify multiple ranges that reflect the value distribution in the logging table at the point in time you define them. However, over time, new inserts will fall into only one of the subsets you have defined, so the parallelization will eventually be eliminated: only one of the ranges will get all the new inserts. For such cases, we do not have a simple solution. The delimitations would then need to be periodically adjusted, for which currently no safe and automated procedure exists.
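The worst case can be illustrated with a small sketch (invented ranges and document numbers, not SLT code): ranges that are balanced at setup time stop being balanced once the sequential counter moves past them.

```python
# Ranges over a number-range-controlled key, balanced "today":
ranges = [(1, 1000), (1001, 2000), (2001, 10**9)]

def range_of(doc_number):
    # Index of the access plan range the document number falls into.
    return next(i for i, (lo, hi) in enumerate(ranges)
                if lo <= doc_number <= hi)

# New documents continue the single sequence from 2500 onwards:
new_docs = range(2500, 2600)
hit_ranges = {range_of(n) for n in new_docs}
# Every new insert lands in the last range only; the first two plans
# no longer receive any new data.
```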

Another option might be to define a delimitation on a key field that comes later in the primary index of the table, for example, field ZEILE for table MSEG, if this field guarantees a stable distribution of the records. In such a case, however, you need to create a record in table IUUC_PERF_OPTION as shown below BEFORE you add this table for replication. This makes sure that the secondary index of the logging table is defined in such a way that field ZEILE is put in the second position, after IUUC_PROCESSED. Only with this changed field sequence can the selection be done in an efficient way.

Special Thanks to Günter, Mihajlo and Greg!

Best,

Tobias on behalf of the SLT Dev Team
