After completing several DMO projects (BW, CRM and ERP), I found that a good way to refresh and retain what I learned is to write about and share the knowledge gained on the DMO parallel export/import phase, gathered by observing and studying its behavior via the logs, migration and command files in each and every project, in addition to the good sources in the SAP guides and notes and the great blogs and documents on SCN.

In this blog, I will focus merely on how DMO works its magic: migrating the upgraded/updated shadow repository during uptime and the application data during downtime to the target HANA DB, as depicted in pictures A and B, steps 2a and 2b (highlighted in red).

Picture A: (Source credit to @Roland Kramer and @Boris Rubarth, Thanks!)

Picture B: (Source credit to @Roland Kramer and @Boris Rubarth, Thanks!)

Picture C: Uptime migration to target, HANA Database (extracted from upgrade analysis xml upon SUM-DMO completion)

Picture C shows the DMO-specific phases behind the uptime migration during Preprocessing/Shadow Import. I will talk about the phases highlighted in red, such as how the shadow repository and its objects are created and moved to the HANA database, and how DMO knows which R3load, kernel and binaries to use whilst there are 2 different databases involved (in our case, source = Oracle and target = HDB).

From the above, we know that the upgraded/updated shadow repository created in the source is ready to move to the HANA database. The clone size is calculated in 2 groups, UT and DT, based on the objects in the PUTTB_SHD table.

UT = system tables (e.g. DD*, REPO*, etc.)

DT = data tables

PUTTB_SHD = control table for the shadow import during the upgrade; it lists the tables that need to be copied and imported into the shadow

Example selection syntax from the logs (the variables vary from phase to phase):


Selecting from 'PUTTB_SHD' with condition '( ( ( CLONE == "U" or CLONE == "B") and ( SRCTYPE == "J" or SRCTYPE == "T" or SRCTYPE == "P" or SRCTYPE == "C" ) and SRCFORM == "T" ) or ( ( CLONE == "S" or CLONE == "C" or CLONE == "F" or CLONE == "G" or CLONE == "T" or ( CLONE == "U" and FLAGNEW == "X") ) and ( DSTTYPE == "J" or DSTTYPE == "T" or DSTTYPE == "P" or DSTTYPE == "C" ) and DSTFORM == "T" ) )'.


Selecting from 'PUTTB_SHD' with condition '( ( CLONE == "D" or CLONE == "C" or CLONE == "G" or CLONE == "T" ) and ( SRCTYPE == "J" or SRCTYPE == "T" or SRCTYPE == "P" or SRCTYPE == "C" ) and SRCFORM == "T" )'.

The directories migrate_ut and migrate_dt are created by the phases EU_CLONE_MIG_UT_PRP and EU_CLONE_MIG_DT_PRP respectively in /SUM/abap/.

Both the migrate_ut and migrate_dt directories contain .CMD, .STR and other files generated by R3ldctl. .TSK files are generated by R3load during export/import, with the migration result for each table (EXP = export files; IMP = import files).

EU_CLONE_MIG_*T_PRP: prepares the table COUNT(*), splits tables at a certain threshold, and writes the list of shadow tables and views to be imported, plus other detailed information, into the bucket file MIGRATE_UT.BUC.
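To picture the bucketing idea, here is a minimal sketch, not SUM's actual algorithm: tables are grouped into buckets so that each bucket stays under a row-count threshold. The threshold and the table sizes below are invented for illustration.

```python
# Illustrative only: group tables into export/import "buckets" by row count.
# The threshold and counts are made up; SUM's real logic lives in the PRP phase.
def build_buckets(table_counts, threshold):
    buckets, current, current_rows = [], [], 0
    # largest tables first, so big tables tend to get buckets of their own
    for name, rows in sorted(table_counts.items(), key=lambda kv: -kv[1]):
        if current and current_rows + rows > threshold:
            buckets.append(current)
            current, current_rows = [], 0
        current.append(name)
        current_rows += rows
    if current:
        buckets.append(current)
    return buckets

counts = {"DD03L": 900_000, "REPOSRC": 400_000, "DD01L": 50_000, "DD04L": 20_000}
print(build_buckets(counts, threshold=1_000_000))
# -> [['DD03L'], ['REPOSRC', 'DD01L', 'DD04L']]
```

Each resulting bucket corresponds roughly to one unit of work for an R3load pair; the real bucket list ends up in MIGRATE_UT.BUC.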

EU_CLONE_MIG_*T_CREATE: R3load (HANA) runs to create the table structures in HANA.

How to verify?

There are only MIGRATE_UT_CREATE_*_IMP.TSK files, but no *_EXP.TSK files, in SUM/abap/migrate_ut_create and SUM/abap/migrate_dt_create. You'll see object type (T) and action (C) in the .TSK files.

Example: a random check on several .TSK files returns object type Table (T) and action Create (C):

T PAT13 C ok



To further explain the syntax in the .TSK file: each line holds the object type (T = table), the object name (PAT13), the action (C = create) and the status (ok).
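As a quick illustration, such a line can be split into its four fields with a few lines of Python. The field meanings here are inferred from the sample above; this is just a reading aid, not an official .TSK parser:

```python
# Parse one .TSK entry into its fields: object type, object name, action, status.
# Field meanings inferred from the sample entry above (T = table, C = create).
def parse_tsk_line(line):
    obj_type, name, action, status = line.split()
    return {"type": obj_type, "name": name, "action": action, "status": status}

entry = parse_tsk_line("T PAT13 C ok")
print(entry["name"], entry["action"])
# -> PAT13 C
```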

EU_CLONE_MIG_UT_RUN (UPTIME): entries of the *UT* group tables are exported from the shadow repository and imported to HANA in parallel. R3load pairs do the export and import: the first R3load (part of the shadow kernel) exports the data, and the second R3load (part of the target kernel) imports the data into the SAP HANA DB.

Both R3loads run in parallel on the same host. No export files (dump files) are created, because the data transfer between the R3load pair happens through the main memory of the host. This R3load option is called memory pipes (currently only for non-Windows hosts).
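The memory-pipe mechanism can be mimicked with a POSIX named pipe (FIFO): one side writes (the export role), the other reads (the import role), and no dump file with the table content ever lands on disk. This is a toy model under those assumptions, not the actual R3load mechanics; the pipe name merely imitates DMO's naming:

```python
# Toy model of R3load pipe mode: an "exporter" thread writes rows into a named
# pipe while the "importer" reads them; the data flows via memory, not a dump file.
import os, tempfile, threading

fifo = os.path.join(tempfile.mkdtemp(), "MIGRATE_UT_00042.PIPE")
os.mkfifo(fifo)  # POSIX only, matching DMO's non-Windows restriction

def exporter():
    with open(fifo, "w") as pipe:      # plays the first R3load (export side)
        pipe.write("row1\nrow2\nrow3\n")

t = threading.Thread(target=exporter)
t.start()
with open(fifo) as pipe:               # plays the second R3load (import side)
    rows = pipe.read().splitlines()
t.join()
print(rows)
# -> ['row1', 'row2', 'row3']
```

Opening the FIFO for writing blocks until a reader opens it, which is why both sides must run concurrently, just as the R3load pair does.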

To understand more, refer to the 2 great blogs shared by Boris Rubarth: DMO: technical background and DMO: comparing pipe and file mode for R3load.

This is evident in the MIGRATE_UT_*_EXP.CMD and MIGRATE_UT_*_IMP.CMD files, where, as you can see, 'PIPE' is used:


tsk: "/usr/sap/SID/SUM/abap/migrate_ut/MIGRATE_UT_00042_EXP.TSK"
icf: "/usr/sap/SID/SUM/abap/migrate_ut/MIGRATE_UT_00042_EXP.STR"
dcf: "/usr/sap/SID/SUM/abap/migrate_ut/DDLORA_LRG.TPL"
dat: "/usr/sap/SID/SUM/abap/migrate_ut/MIGRATE_UT_00042.PIPE"

tsk: "/usr/sap/SID/SUM/abap/migrate_ut/MIGRATE_UT_00042_IMP.TSK"
icf: "/usr/sap/SID/SUM/abap/migrate_ut/MIGRATE_UT_00042_IMP.STR"
dcf: "/usr/sap/SID/SUM/abap/migrate_ut/DDLHDB_LRG.TPL"
dat: "/usr/sap/SID/SUM/abap/migrate_ut/MIGRATE_UT_00042.PIPE"

Also, you can see that the update times for the export and import .TSK files are identical or close to each other:

Mar 22 11:16 MIGRATE_UT_00010_IMP.TSK
Mar 22 11:16 MIGRATE_UT_00010_EXP.TSK
Mar 22 11:16 MIGRATE_UT_00001_IMP.TSK
Mar 22 11:16 MIGRATE_UT_00008_IMP.TSK
Mar 22 11:16 MIGRATE_UT_00008_EXP.TSK
Mar 22 11:16 MIGRATE_UT_00009_IMP.TSK
Mar 22 11:16 MIGRATE_UT_00009_EXP.TSK
Mar 22 11:17 MIGRATE_UT_00014_IMP.TSK
Mar 22 11:17 MIGRATE_UT_00014_EXP.TSK

By the way, how does SUM-DMO know which R3load/binaries to use, since there is both a shadow kernel and a target HANA kernel?

DMO distinguishes them: the source-DB binaries (shadow kernel) are extracted to SUM/abap/exe, whilst the target HANA kernel goes to SUM/abap/exe_2nd/ during the configuration phase.

Result at the end of the SUM Configuration phase:

R3load_25-10012508.SAR  PATCH  UNPACK_EXE     OK  SAP kernel patch: R3load, Release: 741
R3load_25-10012508.SAR  PATCH  UNPACK_EXE2ND  OK  SAP kernel patch: R3load, Release: 741
dw_25-10012457.sar      PATCH  UNPACK_EXE     OK  SAP kernel patch: disp+work, Release: 741
dw_25-10012457.sar      PATCH  UNPACK_EXE2ND  OK  SAP kernel patch: disp+work, Release: 741
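Conceptually, the pairing is a simple lookup by role: the export side runs the shadow-kernel R3load under SUM/abap/exe, and the import side runs the target-kernel R3load under SUM/abap/exe_2nd. A tiny sketch of that convention (the function and the SID in the paths are illustrative, not part of SUM):

```python
# Illustration of the directory convention described above: export uses the
# source-DB (shadow kernel) R3load, import uses the target (HANA) R3load.
R3LOAD_BY_ROLE = {
    "export": "/usr/sap/SID/SUM/abap/exe/R3load",      # shadow kernel, source DB
    "import": "/usr/sap/SID/SUM/abap/exe_2nd/R3load",  # target HANA kernel
}

def r3load_for(role):
    return R3LOAD_BY_ROLE[role]

print(r3load_for("import"))
# -> /usr/sap/SID/SUM/abap/exe_2nd/R3load
```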

The phases above run during UPTIME, and only EU_CLONE_MIG_UT_RUN is executed, not EU_CLONE_MIG_DT_RUN. Again, referring to step 2b in both pictures A and B, the application data is only moved to the target database (HANA) once we enter DOWNTIME.

Picture D: Application data migrated to target Database (HANA) via phase EU_CLONE_MIG_DT_RUN:

EU_CLONE_MIG_DT_RUN (DOWNTIME): during downtime, entries of the application data tables (DT) are exported from the source database and imported to HANA in parallel, using the same R3load pairs as in phase EU_CLONE_MIG_UT_RUN.

Lastly, the consistency of the migrated content is checked by running COUNT(*) on each table in the source and in the target database. This behavior can be maintained/manipulated in /bin/EUCLONEDEFS_ADD.LST (with /bin/EUCLONEDEFS.LST as the reference) using the options below:

ignlargercount -> applied when a table might change during cloning (REPOSRC)

igncount -> the table count is ignored

nocontent -> the table does not exist in HANA (DBATL, SDBAH: DB-specific tables)

noclone -> the table does not exist (/BIC*: BW temporary tables)
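The COUNT(*) comparison with the exception options above can be sketched as follows. This is a simplification with made-up counts; the real check is driven by the .LST files inside SUM:

```python
# Simplified model of the consistency check: compare per-table row counts,
# honoring the exception options from EUCLONEDEFS*.LST. Counts are invented.
def check_counts(source, target, options):
    failures = []
    for table, src_count in source.items():
        opt = options.get(table)
        if opt in ("igncount", "nocontent", "noclone"):
            continue                      # count is not compared for this table
        tgt_count = target.get(table, 0)
        if opt == "ignlargercount" and tgt_count >= src_count:
            continue                      # table may have grown during cloning
        if tgt_count != src_count:
            failures.append(table)
    return failures

source = {"MARA": 1000, "REPOSRC": 5000, "SDBAH": 42}
target = {"MARA": 1000, "REPOSRC": 5100}
options = {"REPOSRC": "ignlargercount", "SDBAH": "nocontent"}
print(check_counts(source, target, options))
# -> []
```

An empty result means all compared tables matched; any table name in the list would indicate a count mismatch that the options did not excuse.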

I hope this blog helps others to understand more about DMO. Please correct me on any incorrect statement. Extra input and shared info are greatly welcome!


Nicholas Chang




  1. Jonu Joy

    Great Blog Nicholas, couple of questions

    Is there a limit on the number of R3load processes which can be used?

    Can the DMO tool be run from anywhere (an additional app server or the HANA DB itself), or is it mandatory to run it from the PAS?

    Is there some kind of tracing which we can enable if R3load fails?

    How do we calculate the physical memory needed for DMO?

    Thank you


    1. Nicholas Chang Post author

      Hi Jonu Joy,

      The number of R3load processes/memory still depends on the number of CPUs and the memory resources available, and the recommendations used in classical upgrade and migration are valid and similar for DMO. However, you can adjust the number of parallel processes dynamically for each phase in DMO; refer to the DMO guide.

      DMO should run on the PAS. Procedures and recommendations for a normal upgrade and migration still apply to DMO.

      If R3load fails, the error messages should be recorded in /abap/log. To increase the level of tracing, you can refer to note 885441.

      1. Jonu Joy

        Thx Nicholas,

        I am wondering if you knew what command DMO runs to check the consistency of profiles, whereby it can find duplicate and contradicting entries.

        2 ETQ399 Checking consistency of profiles ‘/sapmnt/ABC/profile/ABC_D10_abc05’ and ‘/sapmnt/ABC/profile/START_D10_abc05’.

        2WETQ399 File ‘/sapmnt/ABC/profile/ABC_D10_abc05’ l. 39: Found duplicate entry for ‘exe/icmbnd’ within same profile!

        2EETQ399 File ‘/sapmnt/ABC/profile/ABC_D10_abc05’ l. 141: Found contradicting entry for ‘rtbb/buffer_length’ within same profile!

        I am looking for the command which DMO runs to find these issues.



        1. Nicholas Chang Post author

          No idea. Basically you can resolve those issues by removing the parameter in the respective profile (either instance or start). FYI, after the upgrade, the START and INSTANCE profiles will be merged.

  2. Srikishan D

    Hi Nicholas,

    Thanks for the extremely useful information above. We are looking to upgrade and migrate a Netweaver BW 7.31 system (on-premise AIX/DB2) to SAP Netweaver BW 7.4 on HANA (in the cloud). While looking at the DMO option, we found information suggesting that DMO does not support a migration/upgrade from a system on-premise to the cloud. Have you come across any such finding during your experience?

    Thanks in advance,


  3. Amit Sharma

    Dear Nicholas,

      Please guide me for following,

    1. What cautions are required while using DMO?

    2. What updates/upgrades/SPs are required for the source system?

    3. Is there any video for the same (end to end), or any option for hands-on practice with DMO?


      Amit Sharma

  4. Anuradha Subhasa


    We have a requirement to capture the phase-wise details to develop a tool that reports progress on a percentage basis. Can anybody help with this?

    We tried AL11 and the OS level, and we have the DMO log files, but nothing is working.


    1. Kasivindhkumar Shanmuganathan

      You can try SAP CCMS logmon to read the log file. We did this on some of our upgrades to get a mail in case of any error or any change in phase completion. You need to identify the patterns in the log file.

  5. Gilberto Gangarossa

    Excellent blog Nicholas, I have a simple question:

    After the upgrade & migration with DMO from Oracle to HANA of my development system, can I use both systems?

    Because I want to use the non-upgraded development system until the end of the upgrade, and I will use the upgraded development system for dual maintenance.

    Is it possible?

    Thanks in advance


    1. Nicholas Chang Post author

      I believe you can, by installing a new SAP application server and pointing it to the Oracle DB. However, this is not ideal and might not be supported; the ideal way is to perform a system copy of the source system and perform the upgrade on the copied system.

