
To optimize DMO downtime, we propose using the duration files of one DMO run for the next run, so that the table split is based on the real migration times instead of the table size – see the blog “Optimizing DMO Performance“.
The duration files list only tables with migration times above a specific limit (tables with short migration times do not need to be considered for table split). We have now detected a small bug in the logic that evaluates the duration files in the current SUM 1.0 SP 20. As a result, the table split adaptation may not be improved. The bug is already fixed in SUM 2.0 SP 00 and higher, and will be fixed in SUM 1.0 SP 22.

Until then, to still optimize the table split, please consider the following approaches.

1) If you have not yet started the first DMO run:
Set the following parameter in SUM\abap\bin\SAPup_add.par so that all table migration times are included in the duration files:

/clonepar/stattimelimit = 0

With this parameter, SUM will create duration files after the migration that can be used to improve the table split calculation for the subsequent DMO run.

2) If your DMO run has already finished the migration:
You can create new duration files that consider all table migration times with the following commands:

SAPup r3load writeduration durationsfile=MIGRATE_UT_DUR.XML limit=0 log/MIGRATE_UT_RUN.LOG

SAPup r3load writeduration durationsfile=MIGRATE_DT_DUR.XML limit=0 log/MIGRATE_DT_RUN.LOG

The files (MIGRATE_*_DUR.XML) are created in the directory from which you executed the commands. They can be used to improve the table split calculation for the subsequent DMO run.
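The exact schema of the duration files is not documented here, so the following is only a sketch for inspecting them: it assumes each entry looks like `<table name="..." duration="..."/>`, which is an invented structure — check your actual MIGRATE_*_DUR.XML before relying on it.

```python
# Sketch: list the slowest tables from a DMO duration file.
# NOTE: the XML schema assumed below (<table name=... duration=.../>)
# is an illustration only; inspect your real MIGRATE_*_DUR.XML first.
import xml.etree.ElementTree as ET

def slowest_tables(path, top=10):
    """Return (table, seconds) pairs sorted by descending duration."""
    root = ET.parse(path).getroot()
    entries = [(e.get("name"), float(e.get("duration")))
               for e in root.iter("table")]
    return sorted(entries, key=lambda t: t[1], reverse=True)[:top]
```

Listing the slowest tables this way makes it easy to sanity-check whether the big tables you expect to dominate the migration actually appear at the top.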

Boris Rubarth
Product Manager SUM, SAP SE


10 Comments


  1. Yong-Wook Kim

    Hi Boris,

    Thanks for the information.

    We are using DMO (SUM version 2.0) for a HANA migration on an ECC 6.0 (Oracle) system.

    I have a question for you: can you explain the split mechanism for big tables?

    As far as I know, the split is based on table size. However, in my experience, when LOB tables are split, it seems the LOB field size is not considered. Can I get more detailed resources on the split mechanism?

    Best Regards,

    Yongwook.
      1. Yong-Wook Kim

        Hi Boris,

        I already read that post, thanks. But I can’t find anything about LOB tables.

        Does the same mechanism apply to LOB tables?

        Best regards,

        Yongwook.
        1. Boris Rubarth Post author

          Hi Yongwook,

          I would have expected to find this question on the overview blog on table split…

          Anyhow, as far as I remember, LOB tables get a scaling factor in the table split algorithm.
          When duration files are used, this is not the case, as the real migration time is then the basis for the table split algorithm.

          Best regards, boris
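As a purely illustrative sketch of the two decision modes described above: SUM's real table split algorithm is not public, so the threshold values, the scaling factor, and the function names below are all invented for illustration.

```python
# Toy model only: SUM's actual split algorithm is proprietary.
# The LOB scaling factor and both thresholds are invented values.
LOB_SCALING_FACTOR = 3.0    # assumed penalty for LOB-heavy tables
SPLIT_THRESHOLD_MB = 1024   # assumed: split tables "larger" than 1 GB

def effective_size_mb(size_mb, has_lob):
    """Size-based estimate; LOB tables get a scaling factor applied."""
    return size_mb * (LOB_SCALING_FACTOR if has_lob else 1.0)

def should_split(size_mb, has_lob, duration_s=None):
    """If a recorded duration is available, the real migration time
    replaces the size heuristic, so the LOB factor no longer applies."""
    if duration_s is not None:            # duration-based decision
        return duration_s > 600           # assumed 10-minute threshold
    return effective_size_mb(size_mb, has_lob) > SPLIT_THRESHOLD_MB
```

The point of the sketch is the branch: with a duration file, the LOB scaling factor drops out entirely because the measured time already includes the cost of the LOB fields.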
          1. Yong-Wook Kim

            Hi Boris,

            We have already run many DMO tests with duration files.

            When a table’s size changes dramatically after the duration file was created, I think there is a weak point in using the duration file.

            Our test result:

            One table in the source DB was 1 MB at the time of duration file creation, and it is 6 GB at present (from 1 MB to 6 GB: a 6,000-fold increase).

            At the time of duration file creation, exporting and importing that table took 8,000 seconds.
            – I don’t know why it took so long. Maybe the hardware was under high load at the time (optimized compression).

            But now SUM calculates 48,000,000 seconds (8,000 sec × 6,000) for that table, divided across 200 R3load processes (out of 450 R3load processes in total).
            – That comes to about 240,000 seconds (67 hours).

            In this environment, large tables (e.g. CDCLS, EDI40) are not split, because SUM calculates that their export and import will complete within 67 hours without a split. (The sum of execution times per number of split processes does not exceed 67 hours.)
            – Of course, those tables were split before.

            What do you think about this situation? I would like to know your recommendation.

            Best regards,

            Yongwook.
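The projection described in this scenario can be checked with simple arithmetic:

```python
# Reproduce the estimate from the scenario above.
old_duration_s = 8_000      # migration time recorded in the duration file
growth_factor = 6_000       # table grew from 1 MB to 6 GB
r3load_processes = 200      # processes assigned to this table

projected_s = old_duration_s * growth_factor      # 48,000,000 s
per_process_s = projected_s / r3load_processes    # 240,000 s
hours = per_process_s / 3600                      # ≈ 66.7 hours
```

This shows why a stale duration file is dangerous: a 6,000-fold growth multiplies the recorded 8,000 seconds into a 67-hour estimate, which then distorts every other split decision.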
            1. Boris Rubarth Post author

              Hi Yongwook,

              without further analysis, my simple answer would be to repeat the run on the system with the table at 6 GB to get duration files that are better adapted. An increase from 1 MB to 6 GB is not typical, I presume. Using the test cycle option is an easy way to repeat only the downtime migration.

              Best regards,

              Boris
              1. Yong-Wook Kim

                Hi Boris,

                My concern is that another table may grow dramatically after the duration file has been created.

                Is there a way to manually specify a particular table to be split?

                If not, maybe we will run without the duration file and accept long-tail processes.

                Best Regards,

                Yongwook.
                1. Boris Rubarth Post author

                  Hi Yongwook,

                  no – there is no manual overriding of the SUM table split decision.

                  Are the tables showing such an increase standard SAP tables, or customer tables?

                  Best regards, Boris
                  1. Yong-Wook Kim

                    Hi Boris,

                    Three customer (CBO) tables and five standard tables:

                    Generated 200 slices for split table ‘ZTP2MMC04030B’ for export+import run.
                    Generated 200 slices for split table ‘ZTP2MMC04020B’ for export+import run.
                    Generated 178 slices for split table ‘ZTA0QMZ20020’ for export+import run.
                    Generated 18 slices for split table ‘/SDF/UPL_LOG’ for export+import run.
                    Generated 201 slices for split table ‘TST03’ for export+import run.
                    Generated 4 slices for split table ‘SXMSCLUR’ for export+import run.
                    Generated 195 slices for split table ‘FPLAYOUTT’ for export+import run.
                    Generated 189 slices for split table ‘FPCONTEXT’ for export+import run.

                    Please refer to attached image. The first table in the list is the example from above.

                    Best Regards,

                    Yongwook.
