DMO: background on table split mechanism

This blog explains the technical background of table split as part of the database migration option (DMO).

As a prerequisite, you should have read the introductory document about DMO: Database Migration Option (DMO) of SUM – Introduction and the technical background in DMO: technical background.

During the migration of application tables, the migration of big tables might dominate the overall runtime. That is why SAPup considers table splitting to reduce the downtime of the DMO procedure. Table splitting shall prevent the case that all tables but a few have been migrated, and that only a small portion of all R3load processes handles these remaining (big) tables. The other R3load processes would be idle (to be more precise: would not run), and the long tail processing of the big tables would increase the downtime unnecessarily. See figure 1 below for a schematic view.


SAPup uses the following approach to define the tail: if the overall usage of R3load pairs (for export and import) drops below 90 %, SAPup handles all tables that are processed afterwards as being part of the tail (see figure 2 below).


During the configuration of the DMO procedure, you will configure a number of R3load processes, which determines the number of R3loads that may run in parallel. This explanation will talk about R3load pairs that are either active or idle, which is rather a virtual view: if an R3load pair has executed one job, it will not wait in status idle, but end. SAPup may then start another R3load pair. Still, for the discussion of table split, we consider a fixed number of (potential) R3load pairs, which are either active or idle. The following figure 3 illustrates this view.


Automatic table splitting

SAPup will automatically determine the table split conditions, and there is no need and no recommendation to influence the table splitting. Your task is to find the optimal number of R3load processes during a test run, and provide the table duration files for the next run. (SAPup will use the table duration files to calculate the table splitting based on the real migration duration instead of the table size; see DMO guide, section 2.2 “Performance Optimization: Table Migration Durations”).

You may still want to learn more on the split logic, so this blog introduces some background on table splitting. Note that SAPup will not use R3ta for the table split.

Table split considerations

Typically, you will expect table splitting to happen for big tables only, but as we will see, the attempt to optimize the usage of all available (configured) R3load processes may result in splitting other tables as well. Still, splitting a table into too many pieces may result in bad export performance: lots of parallel fragmented table segments will decrease read performance and increase the load on the database server. A table may be big, but as long as it has been completely processed before the tail processing starts, there is no reason to split that table. That is why the tool will calculate a minimum of table splits to balance all requirements.

The logic comprises four steps: table size determination, table sequence shuffling, table split determination, and assignment to buckets. A detailed explanation of the steps will follow below. During the migration execution, SAPup will organize tables and table segments in buckets, which are a kind of work package for an R3load pair to export and import. During the migration phase, each R3load pair will typically work on several buckets, one after the other.

Step 1: Sorting by table size

SAPup will determine the individual table sizes, and then sort all tables descending by size.

In case you provide the table duration file from a previous run in the download folder, SAPup will use the table migration duration instead of the table size.
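
SAPup's implementation is not public, but the sorting logic of step 1 can be sketched in Python. All names below are hypothetical; the only assumption is that a duration file, when present, replaces size as the sort metric:

```python
def sort_tables(sizes, durations=None):
    """Return table names sorted descending by the migration metric.

    sizes: dict of table name -> size (e.g. in MB).
    durations: optional dict of table name -> migration duration from a
    previous run's table duration file; if given, it takes precedence.
    """
    metric = durations if durations is not None else sizes
    return sorted(metric, key=metric.get, reverse=True)

# Without a duration file, the biggest table comes first:
print(sort_tables({"T1": 900, "T2": 700, "T3": 950}))  # ['T3', 'T1', 'T2']
# With a duration file, the measured durations win over the sizes:
print(sort_tables({"T1": 900, "T2": 700}, durations={"T1": 10, "T2": 50}))  # ['T2', 'T1']
```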


Assuming we only had sixteen tables, figure 4 above shows the sorted table list. The table number indicates the respective initial position in the table list.

Step 2: Shuffle table sequence

Migrating the tables in sequence of their size is not optimal, so the table sequence is reordered ("shuffled") to achieve a good mixture of bigger and smaller application tables. Figure 5 below illustrates an example.


SAPup uses an internal algorithm to shuffle the table sequence, so that table sizes alternate between bigger and smaller.
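
The shuffle algorithm itself is SAPup-internal. Purely as an illustration of the alternating idea, one simple way (not SAP's actual code) is to interleave the descending-sorted list from both ends:

```python
def shuffle_alternating(sorted_tables):
    """Interleave a descending-sorted list: biggest, smallest, 2nd biggest, ...

    This is only a sketch of *one* way to alternate bigger and smaller
    tables; SAPup's real shuffle logic is not documented.
    """
    result = []
    lo, hi = 0, len(sorted_tables) - 1
    while lo <= hi:
        result.append(sorted_tables[lo])     # next biggest
        if lo != hi:
            result.append(sorted_tables[hi])  # next smallest
        lo += 1
        hi -= 1
    return result

print(shuffle_alternating(["T1", "T2", "T3", "T4", "T5"]))
# ['T1', 'T5', 'T2', 'T4', 'T3']
```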

Step 3: Table split determination

SAPup will now simulate table splitting, based on the number of configured R3load processes. Note that changing the number of configured R3load processes later during the migration phase will affect the runtime of the procedure.

For the simulation, SAPup will work on “slots” that represent the R3load pairs, and will distribute the tables from the shuffled table list into these slots. Note that these R3load “slots” are not identical to the buckets. SAPup will use buckets only at a later step. A slot is a kind of sum of all buckets, which are processed by an R3load pair.

Initially, the simulation will assign one table from the shuffled table list into each slot until all slots are filled with one table. In an example with only eight R3load pairs, this means that after the first eight tables, all slots have a table assigned, as shown in figure 6 below.


In our example, SAPup has filled all slots with one table, and the second slot from the top has the smallest table, so it has the lowest degree of filling.

Now for all following assignments, SAPup will always assign the next table from the list to the slot that has the lowest degree of filling. In our example, SAPup would assign the next table (T7) to the second slot from the top. After that, SAPup will most probably assign the next table (T9) to the first slot, see figure 7 below (sounds like Tetris, doesn’t it?).

Finally, SAPup has assigned all tables from the shuffled table list to the slots, as shown in figure 8 below. Note that the figures are not precise in reflecting the table sizes introduced in figure 4 and 5.
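
The "Tetris" assignment described above is essentially a greedy scheme that always feeds the least-filled slot. A minimal sketch (hypothetical code, not SAPup's implementation) uses a min-heap over the slot fill levels:

```python
import heapq

def distribute_to_slots(tables, n_slots):
    """Greedy distribution: each table goes to the currently least-filled slot.

    tables: list of (name, size) tuples in the shuffled sequence.
    Returns a list of slots, each a list of table names.
    """
    # heap entries are (filled_size, slot_index); heapq pops the smallest
    heap = [(0, i) for i in range(n_slots)]
    slots = [[] for _ in range(n_slots)]
    for name, size in tables:
        filled, idx = heapq.heappop(heap)      # least-filled slot
        slots[idx].append(name)
        heapq.heappush(heap, (filled + size, idx))
    return slots

print(distribute_to_slots([("A", 10), ("B", 1), ("C", 5)], 2))
# [['A'], ['B', 'C']]  -- C lands in the slot that only holds the small B
```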


As the last part of this simulation run, SAPup will now determine which tables to split. The goal is to avoid a long tail, so SAPup will determine the tail, and split all tables that are part of it.

SAPup determines the tail by the following calculation: SAPup sorts the slots by filling degree, and the tail begins at the point in time at which the usage of all R3load pairs is below 90%. All tables that are part of the tail – either completely or partially – are candidates for a split, as shown in figure 9 below. As an example, table T2 is shown as being part of the tail.


SAPup determines the number of segments into which the table will be split by the degree to which the table belongs to the tail: the portion of the table that does not belong to the tail sets the size scale for the table segments to be created. For the example of table T2, this may result in three segments T2/1, T2/2, and T2/3.
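
Under this scale rule, the segment count can be approximated as the table size divided by the portion processed before the tail begins. This is a simplification with hypothetical units; SAPup's actual rounding and limits are not documented:

```python
import math

def segment_count(table_size, table_start, tail_begin):
    """Approximate the number of segments for a table crossing into the tail.

    The part processed before the tail (tail_begin - table_start) defines
    the target segment size; the whole table is cut accordingly.
    """
    before_tail = tail_begin - table_start
    if before_tail <= 0 or table_size <= before_tail:
        return 1  # table lies entirely inside or entirely outside the tail
    return math.ceil(table_size / before_tail)

# Hypothetical units: T2 has size 9, starts at 0, and the tail begins at 3,
# yielding the three segments T2/1, T2/2, T2/3 from the example:
print(segment_count(9, 0, 3))  # 3
```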

SAPup will now extend the shuffled table list by replacing the detected tables with their table segments. Figure 10 shows the example with three segments for table T2.


SAPup starts the next iteration of the simulation, based on the shuffled table list with table segments.

If the calculated tail is negligible (lower than a specific threshold) or if the third simulation has finished, SAPup will continue with step 4.

Step 4: Table and table segments assignment to buckets

The result of step 3 is a list of tables and table segments whose sequence does not correlate to the table size, and which is optimized to fill all R3load slots with a small tail. Now SAPup will work with buckets (work packages for R3load pairs) instead of slots. This is a slightly different approach, but as the filling of the buckets uses the same table sequence as before, the assumption is that it yields the same result.

SAPup will assign the tables of this list to the buckets in the sequence of the list. The rules for this assignment are:

  1. A bucket will get another table or table segment assigned from the list as long as the bucket size is lower than 10 GB.
  2. If the next table or table segment is bigger than 10 GB, the current bucket is closed, and SAPup will assign the table or table segment to the next bucket.
  3. SAPup will put segments of a split table into different buckets – otherwise two table segments would reside in one bucket, which would neutralize the desired table split.

The first rule results in the effect that a bucket may have more table content than 10 GB: if a table of e.g. 30 GB was not determined for a split, the respective bucket will have this size. The second rule may result in a bucket that is only filled to a low degree, if the following table or table segment was bigger than 10 GB and was therefore put into the following bucket. The third rule results in the effect that e.g. for a table with four segments of 5 GB each, several buckets will have a size of 5 GB. Figure 11 below illustrates this with some examples.
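
The three rules can be sketched as follows. This is hypothetical code, not SAPup's implementation; in particular, the handling of the 10 GB limit is simplified:

```python
BUCKET_LIMIT_MB = 10 * 1024  # 10 GB

def fill_buckets(items):
    """Assign tables/segments to buckets in sequence.

    items: list of (name, size_mb, base_table) in the optimized sequence;
    base_table is the original table name, so segments of one split table
    share the same base_table.
    """
    buckets = []
    current, current_size, current_bases = [], 0, set()
    for name, size, base in items:
        # rule 2: an over-limit item closes the current bucket first
        # rule 3: never put two segments of the same table in one bucket
        if current and (size > BUCKET_LIMIT_MB or base in current_bases):
            buckets.append(current)
            current, current_size, current_bases = [], 0, set()
        current.append(name)
        current_size += size
        current_bases.add(base)
        # rule 1: keep adding only while the bucket is below the limit
        if current_size >= BUCKET_LIMIT_MB:
            buckets.append(current)
            current, current_size, current_bases = [], 0, set()
    if current:
        buckets.append(current)
    return buckets

print(fill_buckets([("T2/1", 5120, "T2"), ("T2/2", 5120, "T2"),
                    ("T5", 30720, "T5"), ("T8", 1024, "T8")]))
# [['T2/1'], ['T2/2'], ['T5'], ['T8']]
```

Note how the two T2 segments end up in different buckets (rule 3) and the unsplit 30 GB table T5 occupies one oversized bucket by itself (rule 1).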


Now SAPup has defined the distribution of tables and table segments into buckets, which in turn are part of a bucket list. All this happens during phase EU_CLONE_MIG_DT_PRP for the application tables (and during phase EU_CLONE_MIG_UT_PRP for the repository). Note that the DT or UT part of the phase name is no indication of whether the phase runs in uptime (UT) or downtime (DT): EU_CLONE_MIG_DT_PRP runs in uptime. The migration of application tables happens in downtime during phase EU_CLONE_MIG_DT_RUN. During the migration phase, SAPup will start the R3load pairs and assign the next bucket from the bucket list. As soon as an R3load pair is finished (and closes), SAPup will start another R3load pair and assign the next bucket to this pair, as shown in figure 12 below.


Relevant log files are

  • EUMIGRATEDTPRP.LOG: tables to split, number of buckets, total size
  • EUMIGRATEDTRUN.LOG: summary of migration rate
  • MIGRATE_DT_RUN.LOG: details like R3load logs

Additional considerations

Typically, each R3load pair will execute more than one bucket. Exceptions may happen for small database sizes. As an example, for a total database size of 9992.3 MB and 20 R3load pairs (so 40 configured R3load processes), the tool would reduce the bucket size to put equal load on all R3load pairs. The log will contain a line such as “Decreasing bucket size from 10240 to 256 MB to make use of 20 processes”. Below you see the respective log entry in EUMIGRATEUTPRP.LOG:

1 ETQ399 Total size of tables/views is 9992.3 MB.
2 ETQ399 Decreasing bucket size from 10240 to 256 MB to make use of 20 processes.
1 ETQ000 ==================================================
1 ETQ399 Sorting 10801 tasks for descending sizes.
1 ETQ000 ==================================================
1 ETQ399 Distributing into 20 groups of size 500 MB and reshuffling tasks.
1 ETQ000 ==================================================
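
SAP does not document the formula behind this bucket size reduction. Purely as an illustration, the following heuristic (choosing the smallest power-of-two bucket size that yields at least two buckets per R3load pair) happens to reproduce the 256 MB from the log above; treat the function, its parameters, and the formula itself as assumptions:

```python
def reduced_bucket_size_mb(total_mb, n_pairs, default_mb=10240,
                           min_buckets_per_pair=2):
    """Guess at a reduced bucket size for small databases.

    Keep the default 10 GB bucket size if it already yields enough buckets;
    otherwise pick the smallest power-of-two size giving each R3load pair
    at least `min_buckets_per_pair` buckets. Hypothetical logic only.
    """
    needed_buckets = n_pairs * min_buckets_per_pair
    if total_mb / default_mb >= needed_buckets:
        return default_mb
    target = total_mb / needed_buckets
    size = 1
    while size < target:
        size *= 2
    return size

# Reproduces the logged example: 9992.3 MB total, 20 R3load pairs -> 256 MB
print(reduced_bucket_size_mb(9992.3, 20))  # 256
```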

  • Hi Boris


    Thanks for the details of how the DMO tool handles large table splitting. Some questions:


    1. Does each bucket result in one TSK, cmd set ?

    2. If so, will each R3load pair work on only one TSK, cmd set, or can each R3load pair handle more than one set?

    3. What if one bucket fails? What is the best way of rerunning those failed R3loads (for that bucket, of course after addressing the issue), instead of waiting for the tail end of the phase? (I had asked you the same question in the other forum too; sorry for the duplicates.)

    4. And a final question on row count comparison. Though DMO makes sure it reports the discrepancies, if any, for auditing purposes we may want to generate a list of tables and the rows exported/imported: is there a file or log where I can find that information (I don't want to run a count(*) on the source and target database again!)?


    Once again, thanks for all the detailed technical insight into this DMO process.





    • Hi Raj,

      here are some remarks on your questions:

      1. Yes, each bucket has one control file set.
      2. Each R3load pair will work on one bucket (i.e. one TSK, cmd set) at a time. As soon as this bucket was processed, the R3load processes are done and end. SUM will check this and start the next pair (virtually the "same" pair again) for the next bucket, for the next TSK, cmd set.
      3. If a bucket fails, currently no workaround: you have to wait.
      4. Table comparison: no log file available that lists the table names.


      Regards, Boris

  • Hi Boris Rubarth


    Thanks for always sharing!


    btw, i got a question for below statement.


    "SAPup will now simulate table splitting, based on the number of configured R3load processes. Note that changing the number of configured R3load processes later during the migration phase will affect the runtime of the procedure."


    What will happen if I add extra R3load processes (e.g. 5 pairs) during the DMO export/import phase via "set procpar", after the simulation has been performed for the initially configured R3load processes? Will the newly added R3load processes be utilized for the export/import?



    Nicholas Chang

    • Hi  Nicholas Chang,


      adapting the number of R3load processes during the migration will be taken into consideration by SUM. It will not happen immediately, but as soon as the next R3load pair is done, the number of active R3loads is considered.


      Regards, Boris

  • Dear Boris Rubarth,


    Apart from the above questions, I want to add a third question:


    "Still, splitting a table into too many pieces may result in a bad export performance: lots of parallel fragmented table segments will decrease read performance, and increase the load on the database server"


    As DMO is calculating the splits based on the number of R3load processes, how do we decide the correct value? If we take more R3loads, there will be more splits, and the statement above says the export performance could be impacted because of high sequential reads. If we keep very few R3loads, there will be fewer splits, and the import into HANA would perform badly.




    • Dear Ankit,


      the number of R3load processes should be as high as possible, related to the performance of the application server used. A test run is required to determine the number of R3loads possible, and the next run should be started with this optimal number.


      Regards, Boris

      • Hello Boris,


        As you mentioned, if there are more splits, there could be a high sequential read impact. How can we foresee and avoid this?




      • Hi Boris,

        I tried several benchmark runs to experiment with different numbers of R3load processes.

        With R3load >4x CPU cores, the CPU maxed around 40-50% and runtime started to increase.

        With 15x setting (just to see if I will exhaust resource), there were around 7k packages in 350 process slots, and total runtime was much higher than 2x setting (16 minutes vs 8 minutes).

        As there was no bottleneck in CPU, Memory, Network, and IO for PAS and DB server, I wonder how we could utilize a powerful application server for parallel process.


          • Hi Boris,

            Thank you, and yes, I read that blog and that’s what I came up with.
            R3load allocation is not stable, e.g. it fluctuates between 20-60 processes, like a heartbeat, and allocating more processes will not improve runtime.

            My example used:

            PAS: 32 CPU, 32GB RAM, 100GB SSD
            DB: 32 CPU, 32GB RAM, 1TB SSD, DB Size: 200GB
            Network: 10Gbps
            Software: SLES 12 SP4, ECC EhP8, DB6 11.5, SWPM 2.0 SPS07

            Here’s 70 R3load processes. 100% DB and 100% table sample.

            And here’s 100 R3load processes. 


            And I also got a long tail even though the table duration file was used.



  • Dear Boris,


    During phase EU_CLONE_MIG_DT_PRP I've noticed that one table has an index with 12 fields, and these fields are being used to do the split, so this process has been running forever, more than 3 days now. I want to influence the fields to be used via the file EUCLONEDEFS_ADD.LST. My question is: if I stop the SUM processes now and create the file with the new fields to be used, will this new file be picked up without needing to redo the split from scratch for all of the other tables that have already finished?


    Kindly advise.

    Kind Regards,

    Claudio Quina

  • I am currently running the ECC upgrade & migration to HANA. During the execution phase (downtime) we have been seeing all the logs of export/import, showing the total time spent by EXP R3load and export PIPE, but as soon as this phase completed, all the logs for exp/imp got deleted automatically; just CMD/STR/OUT files are remaining in the migrate_dt directory. Please let me know how I can analyse my whole migration now. Also, is this normal? Could I have saved my logs by skipping any step?

  • Hi Boris,

    Using the benchmarking tool I have determined that 280 is the optimal number of R3load processes to be used for my ECC migration. So while executing DMO on the production system, do I just set 280 R3loads as soon as I enter downtime and continue till the end of downtime, or should I start with a lower number of R3load processes and keep increasing slowly?


    • Hi Kenneth,

      the recommendation is as follows:
      after you have used the benchmarking tool to determine the optimal number of R3loads, you should do a DMO test run with this number of R3loads, and check whether it is also the optimal number for DMO. Once you have your number, you enter this number for the downtime processing and keep it during the complete procedure.
      Hope you have already checked the following blog on DMO performance optimization:

      Thanks, Boris

  • Hi Boris,

    we did a few tests and got 300 R3loads as the optimal number for a minimal downtime.

    However during table split procedure - which is done during online phase - we experience a very high load on the system.

    Many processes with "SAPuptool splittable table=/...."

    Is there any possibility to reduce the number of processes during uptime? Similar to the old MigMon, where we were able to configure the number of R3ta processes.



    • Hi Christoph,

      the configuration for R3load distinguishes between R3load for uptime and R3load for downtime. You can configure the R3load uptime processes to influence these processes you have seen.

      Regards, Boris


    Hello Boris,


    We recently did a migration of an SAP EWM system from Windows/SQL to Windows/HANA using SUM DMO 1.0 SP20.


    There were no errors during downtime, and it took 8 hours to finish downtime phase.


    When we compare the time mentioned in upgana.xml, it doesn't match the actual time: upgana.xml shows only 2 hours for the downtime phase.

    EUMIGRATEDTRUN.LOG does show correct time values.




  • Hi Boris,

    One quick question: does the table shuffling mechanism apply to DMO with System Move (serial/parallel), FILE mode too?

    I realized there's no shuffling of tables: with DMO system move, tables are imported to HANA in descending order of table size, which is not very optimal, as all the large tables are started at the beginning.

    Hope to hear from you soon.


    Nicholas Chang

  • Hi Boris,

    I currently have a problem with a tail, and from what I can see, three tables are causing the issue: S001, S003 and S006.

    Table sizes are just around 10 GB, but the row count is almost 200 million. As per the duration file, these tables start relatively late, two thirds into the migration, and run with minimal CPU utilization for over 5 hours at the end. For the next iteration, providing the duration file to SUM, SUM doesn't split the tables or prioritise the order differently, so the tail is still there.

    What is your view on how to handle this situation?

    Best Regards



  • Hi Boris,

    Thanks for your detailed explanation, Could you please clarify my query regarding below part:

    "SAPup will now simulate table splitting, based on the number of configured R3load processes. Note that changing the number of configured R3load processes later during the migration phase will affect the runtime of the procedure."

    So if we keep n R3load processes during the EU_CLONE_MIG_DT_PRP phase, do we have to maintain the same number of R3loads during the export as well? My guess is it can be increased in the export phase compared to the table splitting phase, as the export runs in downtime.



  • Hi Abhirup,

    not sure if you are mixing up something: we configure the number of R3loads for processing in uptime and downtime separately. SAPup considers the number of R3loads configured for downtime processing for the calculation of table splitting, although this calculation happens during uptime.

    That is why the number should be kept the same for the calculation phase and for the downtime migration. But this applies to later test runs; in the first test runs, you may adapt the number of R3load processes to find the best fit, then take this number for the next run and keep it stable.

    Regards, Boris

  • Hello Boris,

    we are in the middle of a migration from Oracle to HANA; the biggest object we have is a LOBSEGMENT, which is obviously associated to a table. So when SUM is doing the split by itself, it checks the table, which is very small, so the table is not being split.

    The problem is that during the export SUM starts exporting this table (in the end it is exporting the contents of the LOBSEGMENT) using only one R3load process. Since this LOBSEGMENT is almost 2 TB, the export takes about 3 days, which is unacceptable.

    How can you force SUM to take LOBSEGMENTs into account in order to speed up the process? Or how can you manually add a table to the split list?

    • Hello Jorge,

      For pool and cluster tables, SUM uses a factor to consider extra time, but this is not used for LOBSEGMENTs. I guess this is a typical case for which the duration files are helpful, as they contain the migration time. This is a better foundation for the table split algorithm than the table size.

      Regards, Boris