
DMO: downtime optimization by migrating app tables during uptime

This blog discusses a technique to further reduce the downtime of the Database Migration Option (DMO) procedure of the Software Update Manager (SUM). Before reading this content, be sure to be familiar with the general concept of DMO, explained in Database Migration Option (DMO) of SUM – Introduction and in DMO technical background.

[Update: blog has been updated on Sept 16th 2019 to reflect general availability]


Downtime-optimized DMO of SUM 2.0 can reduce the downtime of the DMO procedure. It integrates a technology that enables the migration of selected (large) application tables during the uptime processing of DMO, thereby reducing the migration time spent in downtime.



During uptime processing, the source system is still available for end users. End user activity in the system may change application tables, so if these tables have already been migrated to the target database (SAP HANA database), the changes have to be recorded and transferred to the target database as well. A dedicated technology offers the required procedure to set triggers on the respective application tables to create log entries, frequently analyze the logs, and transfer the delta to the target database.
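The record-and-replay idea can be pictured with a small sketch (conceptual only: the real procedure uses database triggers inside SUM, and none of the names below are SAP APIs):

```python
# Conceptual model of trigger-based change recording and replay.
# In the real procedure, database triggers write log entries; here a
# wrapper around the source "table" plays that role.

class TriggeredTable:
    """A source table whose writes are recorded in a trigger log."""

    def __init__(self, rows):
        self.rows = dict(rows)   # key -> row data
        self.log = []            # recorded changes (the "trigger log")

    def write(self, key, value):
        self.rows[key] = value
        self.log.append(("upsert", key, value))

    def delete(self, key):
        self.rows.pop(key, None)
        self.log.append(("delete", key, None))

def initial_transfer(source):
    """Uptime: copy the current table content to the target."""
    return dict(source.rows)

def replay_delta(source, target):
    """Apply recorded log entries to the target and drain the log."""
    while source.log:
        op, key, value = source.log.pop(0)
        if op == "upsert":
            target[key] = value
        else:
            target.pop(key, None)

# End users keep changing the table after the initial transfer ...
src = TriggeredTable({1: "a", 2: "b"})
tgt = initial_transfer(src)
src.write(3, "c")
src.delete(1)
# ... and the replicator brings the target up to date.
replay_delta(src, tgt)
print(tgt)  # {2: 'b', 3: 'c'}
```

The point of the mechanism is that the target stays consistent without locking end users out: changes made after the initial transfer are captured in the log and replayed later.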

Specific considerations compared to “standard” DMO

  • A text file has to be created containing the application tables to be migrated during uptime (one table name per line)
  • The approach is available with SUM 2.0 (SP 06 and higher), which means:
    • The allowed target release of SAP_BASIS is 750 or higher
    • The source system has to be on Unicode already
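For illustration, such a table list file could be generated like this (the file name and the table names are just placeholders; you pass your own list to SUM):

```python
# Write the list of application tables to be migrated during uptime,
# one table name per line. The table names here are placeholders.
uptime_tables = ["VBAK", "VBAP", "BSEG"]

with open("uptime_tables.txt", "w") as f:
    f.write("\n".join(uptime_tables) + "\n")

# Verify the format: one table name per line, no separators.
with open("uptime_tables.txt") as f:
    print(f.read().splitlines())  # ['VBAK', 'VBAP', 'BSEG']
```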


Known limitations

  • downtime-optimized DMO works for SAP Business Suite systems (not for SAP BW)
  • downtime-optimized DMO IS supported for the scenario System Conversion (targeting SAP S/4HANA) as well, provided the source system is not yet on the SAP HANA database
  • [updated on February 26th 2020:] Restriction was removed with SUM 2.0 SP 07: downtime-optimized DMO IS now supported for targeting SAP S/4HANA 1909


No registration for customers any longer

With SUM 2.0 SP 06 and higher, the approach is generally available without the need for registration.


Abbreviations used below:

  • PAS: Primary Application Server (formerly known as Central Instance, CI)
  • PRD: Productive
  • SHD: Shadow
  • TGT: Target
  • CRR: Change Record and Replay technology


Technical background


The initial situation is like for the “standard” DMO:


Again, like in standard DMO, the shadow repository is created by the shadow instance:



The shadow repository is copied from the source database to the target database, the SAP HANA database.

Note that the shadow instance still exists: it is currently not used, but it is not deleted as it would be in the standard DMO.


Now the triggers for the selected application tables are set up, and the initial transfer of the triggered tables starts.

The triggers are set by SUM internal technology.


Still in uptime, the delta transfer of the application tables is executed. For this purpose, a job starts the CRR replicator on the shadow instance to check for trigger logs and transfer the delta to the Writer. For the Writer to write the data to the SAP HANA database, an additional instance is needed that uses the target-version kernel for the SAP HANA database. This instance is called the TMP (temporary) instance.
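The replicator job can be thought of as a polling loop that periodically drains the trigger log (a conceptual sketch, not SUM code; the batch size and interval are invented):

```python
import time

def run_replicator(fetch_log_batch, apply_batch, poll_interval=1.0, stop=lambda: False):
    """Drain the trigger log in batches until told to stop (conceptual sketch)."""
    while not stop():
        batch = fetch_log_batch()
        if batch:
            apply_batch(batch)
        else:
            time.sleep(poll_interval)  # nothing new yet: wait and poll again

# Tiny demo: three pending log entries, applied in batches of two.
pending = [("upsert", 1, "a"), ("upsert", 2, "b"), ("delete", 1, None)]
target = {}

def fetch(n=2):
    batch = pending[:n]
    del pending[:n]
    return batch

def apply_batch(batch):
    for op, key, value in batch:
        if op == "upsert":
            target[key] = value
        else:
            target.pop(key, None)

run_replicator(fetch, apply_batch, stop=lambda: not pending)
print(target)  # {2: 'b'}
```

In the real procedure, the final drain of the remaining log entries happens after downtime starts, so no changes are lost.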


Downtime starts; now the remaining delta of the triggered application tables is migrated.


Now the remaining application tables (those that were not triggered) have to be migrated as in the standard DMO.


The target kernel is now applied to the PRD instance, and the system is started to allow the update of the application tables. This is still business downtime.


Once the application tables are updated and the procedure is finished, the system is available again.



  • Hi Boris,

    I am currently involved in a migration project. The project scope is migrating from MSSQL 2012 to ASE 16.0. While I am reading the guides I noticed that SUM has new feature called DMO and when I got deep into it, saw that DMO can only be used for target database ASE on request (min. required NW release must be SAP_BASIS 740 SP9).

    My Business Application Suite version and SP levels are: ERP EHP6, SAP_BASIS 731 SP07 and SAP_ABA 731 SP07.

    In this situation what are your suggestions to continue with? I would appreciate it if you could advise.


    Ali Taner

    • Hi Ali,

      well, if the version and SP levels you listed are the start release, you may consider reaching a target level that is supported by "DMO with target ASE".

      As far as I know, SP7 for EHP7 of SAP ECC 6.0 may be coming out soon, and it will be based on SAP_BASIS 7.40 SP09.

      If this does not fit, you will have to use Software Provisioning Manager.

      Regards, Boris

  • Hi Boris,

    some questions/clarifications:

    1- you say the unicode conversion is not possible? It is possible with SLT as long as RFC connections are used to read/write the data. SLT can also handle cluster/pool/INDX tables very well.

    2- BW is not supported? The majority of replication scenarios for SAP HANA use data replication into BW on HANA.

    • Hi Alexey,

      1 - yes, for downtime optimized DMO, it is currently not possible to include the unicode conversion. Although the SLT technology is capable of covering the unicode conversion, the integrated usage in DMO does not (yet) allow it.

      2 - BW is not supported as a source system for downtime optimized DMO. I guess the replication scenarios that you refer to have an ECC backend as a source, where changes are replicated into BW on HANA - correct? This is different to the scenario where BW as a source would require triggers.

      Regards, Boris

      • Hi Boris,

        if certain NW ABAP based products like BW or SCM (in case BW functionality is used) are not supported as source systems then please document this accordingly, also in the "Restrictions and Limitations" section of the related SAP note. Currently, this is not transparent.

        Thanks and regards,


  • Hi Boris,

    Thanks for this blog.

    I had a question on the usage of SLT with DMO of SUM.

    You mentioned to select DMIS addon as part of stack.

    What if our landscape already has the latest - DMIS SP08?  and we wish to use DMO of SUM to update/Migrate ?

    Do we manually set up an SLT config naming the schema as SAPSID, replicate the tables we need manually, and later start SUM to update/migrate to HANA?



    • Hi Jayesh,

      for downtime optimized DMO, the DMIS AddOn is used, but we do not use the setup like for SLT, so there will be no separate SLT replication server.

      The DMIS AddOn may or may not be installed on the source system; meanwhile, SUM SP13 can handle both situations.

      If your landscape is using an SLT replication server, this will not be used for downtime optimized DMO. So no SLT setup required on that server.

      Regards, Boris

  • Thanks Boris - Very good content.

    When we say not using other SLT server to replicate large tables ,

      1)  should we use Source system as SLT ?

      2)  will that replicate table data to same schema as target SUM schema i.e. ECC SAP<SID>

      3) if yes, SLT configuration replicates data dictionary tables to the target schema by default - is it ok to overwrite the SUM transfer tables?

  • Hi Boris,

    May i know the estimate date for GA?

    Also, how many customers have registered themselves for using this tool, and what's the outcome?


    Nicholas Chang

    • Hi Bhushan, Nicholas,

      downtime optimized DMO is still "available on request", and we can't estimate when it will be made generally available (GA).

      It is not necessarily a consulting service: "available on request" means that customers / partners have to request the usage, and we will decide on project details as well as development capacity based on the request. However, it is not a bad idea to have experienced SAP consulting colleagues involved.

      Regards, Boris

  • Thanks Boris,

    I am trying to get the below note to understand the optimized part of DMO. However, it is not yet released.

    2005472 - Downtime Optimized Database Migration Option for Software Update Manager



    • Hi Bhushan,

      this note is not released for customers, that is why you cannot access it.

      My hope was that this blog serves to understand "downtime optimized DMO" ...

      Regards, Boris

      • Hi Boris,

        The Downtime Optimized DMO is very clear now with your blog, thanks again for explaining it in detail. Basically, for one of our clients, we proposed an in-place DMO migration for their ERP system (ECC6-EHP5) on DB2 with a size of 13 TB. The client only has a 24-hour downtime window, and hence we were exploring the optimized option. However, the client wants to be sure the optimized DMO is not a consulting offering.



  • Boris I have tried opening an incident under component BC-UPG-TLS-TLA but I received a response that it was not the appropriate area.  Is there a different component that needs to be used? 

    Thank you for the blog.  This was very informative.

  • Hi Boris,

    I opened a ticket under BC-UPG-TLS-TLA and have been going back and forth with SAP. They are saying that there is no pilot anymore and I was referred to the standard DMO note 2161397. Can you please help us with this. I can provide the message number.



    • Hi Anil,

      apparently, the processor of your incident is not familiar with the procedure that enables you to take advantage of the "Downtime Optimized Database Migration Option" of SUM. Please send the incident back and ask for forwarding it to the Development Support level. It's not SAP note 2161397 but 2005472 that needs to be considered in your case.

      Hope this helps,


      • Thanks Ronald...I just sent back the incident and asked the processor to forward it to Development support. Will let you know how it goes.



  • Hello,

    Could you please clarify for us if the near zero downtime offering for HANA migration from SAP and the downtime-minimized DMO described here are two different scenarios.



    • Hello Robert,

      not sure what you are referring to as "near zero downtime offering for HANA migration" - is it the nZDT service offering?

      Anyway these are two different technologies.

      Regards, Boris

  • Hi Boris, I opened mesg126683 in BC-UPG-TLS-TLA regarding the pilot of downtime optimized DMO, and attached all the info from our last normal DMO run in the POC HANA system. Still no response from support. Is this the right queue?



    • Hello Boris,

      we have run 4 more migration iterations for the DMO run on a 20 TB ECC 6 EHP6 Oracle system migrating to HANA.

      We have observed very poor performance during the first 12 hours of the migration, where, per EUMIGRATEDT.logs, just around 2 TB of the data gets migrated. In the next 30 hours or so we get around 700 GB/hour throughput.

      We have tried many different options to optimize the migration, i.e. increasing/decreasing the table split options during split and increasing/decreasing the number of R3load processes during migration, but that is not helping to speed up the migration. We have observed that during these first 12 hours the CPU IO on the Oracle DB server is always more than 60%, and we have observed more than 100 blocked processes in sar output.

      Keeping all these in mind, we want to explore possibility of migrating APP tables during uptime during this method the blog provides.

      Before opening the message with SAP, I have a few questions for Boris

      1. My understanding is that this pilot service to analyze the tables by SAP development support is free of cost ?

      2. How much time does SAP development support take to get engaged to analyze our scenario and recommend solutions around tables which can be migrated online?

      3. After development support provides recommendations, is there any further involvement of SAP professional services during the rest of the project duration?



      • Hello Rajdeep,

        thanks for your question. I'd like to take this opportunity to clarify some aspects.

        1. This is not a pilot service, it is a feature that is piloted. The participation is not free of costs, as SAP colleagues from consulting and/or Active Global Support will have to be involved and will have to be paid by the customer.

        2. This is a program to pilot a feature, it is not a service to analyze your scenario. The time for the project depends on the scope of the project, like involved systems, number of iterations, ...

        3. The involvement of the SAP colleagues is throughout the complete project, not just during a kind of analysis phase.

        My recommendation is to analyze your scenario thoroughly, and of course I can recommend involving SAP colleagues for this. But this is not bound to the "downtime optimized DMO" feature.

        Regards, Boris

    • Hi Yoh,

      we do have plans to reduce the downtime of a system conversion to SAP S/4HANA: it is called "downtime optimized Data Conversion". The idea is similar: do more during uptime to have less downtime. Technically it is a bit different, and more ambitious. I will try to post more on that soon.

      Regards, Boris

  • Hello Boris,

    thanks for this article. It gives a very good introduction to this topic. I still see the note as not released and no other articles regarding this. Is this still valid? What is the status now for downtime optimized DMO?
    I have found note 2153242 - Estimation of table sizes and downtime for SUM DMO with SLT, which gives a sort of starting point but nothing more.


    • Hello Maurizio,
      thank you for the feedback.
      The status is still as mentioned: currently we do not accept further pilots due to the existing workload. I'd be glad to start with new pilots at the beginning of next year, and will update the blog accordingly, so stay tuned.
      Regards, Boris

      • Hi Boris,


        We are eagerly waiting for optimised DMO to become generally available. This will help many SAP customers avoid longer downtimes for HANA migrations of production environments.






  • Hallo Boris,

    We tried using UPGANA.xml file from previous DMO runs for ECC EHP6 upgrade and migration to EHP7 on HANA. SUM we used was SUM 1.0 SP17 PL8.

    When we provided UPGANA.xml in sapup_add.par file in /usr/sap/<SID>/SUM/abap/bin/ it was not accepted by SUM tool.

    We received below error in phase - MAIN_SHDIMP/SUBMOD_MIG_PREPARE/EU_CLONE_MIG_UT_PRP as follows -

    Illegal top level tag analysis in '/usr/sap/<SID>/Download_DIR/UPGANA.xml' - expected 'Clone durations' .

    Can you please help on this?




    • Hello Ambarish,

      you can provide the UPGANA.XML file from a previous run by simply putting it into the download folder. If the error persists, please create an incident on component BC-UPG-TLS-TLA.

      Thanks, Boris

      • Hi Boris,

        Files were copied to the download folder however we received same error for each run.

        We tried reusing UPGANA.xml from 3-4 mock runs and every time same error.

        Also we reused these files for same system with same SID.

        "MIGRATE_UT_DUR.XML & MIGRATE_DT_DUR.XML" were taken successfully by SUM tool.

        Problem as mentioned above occurred for the "UPGANA.XML" file only.

        Could difference in DB size or table growth be the reason behind this?


        • Hi Ambarish,

          there is no need to use SAPup_add.par for both the duration files or the UPGANA.XML, as SUM will consider these automatically if found in the download folder.

          Regards, Boris

  • Hi @Boris

    Just want to check with you: is the known limitation updated for SUM SP19? Are BW or other NW products supported for downtime-optimized DMO?

    Referring to the big picture of SL ToolSet 16, it says BW is supported?




      Hi Nicholas,

      thanks for asking: SAP BW is not supported for "downtime optimized DMO". For SAP BW, the Delta Queue Cloning is the approach to optimize the downtime. We will have to adapt the slide in the referenced picture.

      Regards, Boris

    • Hi Suresh,

      thanks for asking. I have updated the blog now. The procedure is not GA.

      We accept a limited number of additional projects, as mentioned in the blog.

      Regards, Boris

  • Hi Boris,

    Since DMO is not supporting Unicode Conversion, does Downtime Optimized DMO work on a non-unicode source too?


    Nicholas Chang



    • Hi Nicholas,
      hope it is OK if I adapt your statement:
      DMO is able to cover the Unicode Conversion, but not for target systems based on 7.50 and higher.
      Concerning "downtime optimized DMO", one of the requirements is that the target system is based on 7.50 or higher. Yes, the conclusion is that "downtime optimized DMO" requires a Unicode source system.
      Regards, Boris

  • Hi Boris,

    after revisiting note 2377305 - Database Migration Option (DMO) of SUM 1.0 SP20, a source non-unicode system can be converted, upgraded and migrated using DMO as long as the target release is <750

    Only if the target is >= 750, source system need to be on unicode.

    And I noticed from 2442926 - Prerequisites and Restrictions of downtime-optimized DMO, that the allowed target release for downtime-optimized DMO was changed to 750 or higher; previously, I believe, 7.40 was supported.

    Correct me if i'm wrong.



    Nicholas Chang

  • Hi Boris,

    Thank you for excellent blog.

    Can you confirm, for the scenario "System Conversion (targeting SAP S/4HANA)", whether a downtime-optimized or NZDT scenario is possible? Or is standard DMO the only option?



    • Hi Ambarish,
      thanks for asking. I plan to write a blog on this aspect in January.
      Current status is
      • for the System Conversion, NZDT is available, see SAP Note 693168.
      • Downtime-optimized DMO is not applicable for the System Conversion.
      • We plan to pilot "downtime-optimized Conversion" for the System Conversion soon - but only for the case that the source system is not yet on SAP HANA database.
      Regards, Boris
  • Hi Boris,

    I hope all is well on your end. I am just reading this wonderful blog of yours on the doDMO topic, which you wrote in 2014. I believe Downtime Optimized Database Migration (doDMO) of SUM 2.0 got a bunch of technical restrictions from SAP product development, and they still exist as of today (2020); do you see any of these going away from a technical perspective in the near future?

    • You cannot use downtime-optimized DMO with the scenario "DMO with System Move"
    • You can combine the scenario DMO without Software Update with downtime-optimized DMO
    • You must switch off other trigger-based technologies such as SAP Landscape Transformation (SLT) when using downtime-optimized DMO.
    • Switch off the ABAP-based archiving during the downtime-optimized DMO.
    • Tables added for Downtime Optimized handling must not be part of Table Comparison
    • Non-Unicode start releases are not supported.
    • Not applicable to the BW system.

      Amit Lal

  • Hi Boris,

    Thank you for sharing.
    A bit update:

    • Current restriction: downtime-optimized DMO is not currently supported for targeting SAP S/4HANA 1909

    Restriction was removed starting with SUM 2.0 SP07.


  • Hi Boris,

    while performing DoDMO with the below system, we are facing an issue upgrading the DB version to Oracle 12.

    Source : EHP7,Oracle 11g,AIX 7.1

    Target : S4 Hana 1909 FPS00,SLES 15 SP1

    SUM Tool :  2.0 SP8 PL1

    As it is not mentioned for S/4 1909 that the shadow is created on the target DB and that the minimum required DB is Oracle 11, we are not able to understand this case.

    Do we have any limitation that when we use DoDMO the source db restrictions still exists?

      • Hi Shiva,

        today we have updated the related notes (DMO and downtime-optimized DMO) with the requirement to have Oracle on version 12 or higher for downtime-optimized DMO.

        Regards, Boris

        • Hi Boris,

          apologies for delayed update.

          we have created an incident before updating this here. We got the same response, that it has been updated in the notes (DMO and downtime-optimized DMO - 2547309), but the note is still being updated.

          The SUM 2.0 SP08(2882441) has been updated with this information.

          Just FYI: due to some timeline constraints we have now proceeded with the standard DMO option.

          Thanks a lot for your clarification.


          Shiva P

  • Hi Boris,

    With DMO with System Move (SUM 2.0 SP7), Windows is now supported, but how does one perform a DMO with System Move and PARALLEL TRANSFER on Windows, knowing that rsync cannot be used? In the Linux case, SUM specifically calls the rsync tool.

    Robocopy is feasible here? DMO guide did not cover insights on this option. Any inputs will be highly appreciated.


    • Hi Amit,

      good point. I am not a network expert and this is not SUM specific, so my hope would be that there are solutions and/or experts out there that have an idea.

      Regards, Boris

      • Thanks, Boris.

        We are checking internally with SAP Maxattention to understand in detail.

        But it looks like the parallel transfer option is available only for Linux at present. The DMO SUM guide and SAP SUM Note should add this point explicitly.

        Also, I see SUM 2.0 binaries is renamed to, but can't find anything for windows in SUM repo.

        Again, Thanks so much for your quick input.

        Have a nice weekend!

        Best Regards,
        Amit Lal



      • Hi Boris,

        Can we adjust the transfer performance for the RSYNC script ""?

        It seems the default parameter DMO_SYSTEMMOVE_NPROCS is set to 4. Would 6 or 8 be better if our export size is ~760 GB?




        • Hi Jun,

          thanks for asking.

          The script delivered with SUM is one way to handle the synchronization; you can adjust it if you are experienced. Concerning the default parameter you mention: it is not as simple as judging from the export size what the optimal number of processes is. Nevertheless, you can try to adapt the number by setting an environment variable. You do this by creating a SAPup_add.par file (in SUM/abap/bin) and specifying the variable with a line in that file, e.g.

          /proc/userenv= DMO_SYSTEMMOVE_NPROCS=8
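For illustration, that line could also be appended with a small script (the directory below is an example; use your actual SUM/abap/bin path):

```python
# Append the environment-variable line to SAPup_add.par, creating the
# file if it does not exist yet. The directory is an example path.
import os

sum_bin = "SUM/abap/bin"  # example location; adjust to your installation
os.makedirs(sum_bin, exist_ok=True)
par_file = os.path.join(sum_bin, "SAPup_add.par")

line = "/proc/userenv= DMO_SYSTEMMOVE_NPROCS=8\n"
with open(par_file, "a") as f:
    f.write(line)
```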

          Regards, Boris

          • Hi Boris,

            The environment variable is working, and I can see that the log shows NPROCS is 8. But at the OS level, I can see only one rsync process. Are the NPROCS sub-processes of the rsync running in the background and therefore not visible?

            Also, the new value of 8 does not seem to make much difference to the data transfer speed. Would a higher number be better, e.g. 16?




          • Hi Jun,

            there are phases in which only one rsync is running, to check which files to synchronize. You can check the rsync processes on OS level via "ps -ef" and should see one listed with the parameter "--dry-run". If you see that there is only one rsync even during the actual synchronization (a process with "--files-from"), that would be an issue.
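Conceptually, the parallel synchronization splits the file list into per-process chunks, each handed to one rsync via --files-from. A simplified sketch of such balancing (purely illustrative, not the actual SUM script; the file names are made up):

```python
def split_by_size(files, nprocs):
    """Distribute (name, size) pairs across nprocs chunks, largest first,
    always adding to the currently lightest chunk (greedy balancing)."""
    chunks = [[] for _ in range(nprocs)]
    totals = [0] * nprocs
    for name, size in sorted(files, key=lambda f: -f[1]):
        i = totals.index(min(totals))  # lightest chunk so far
        chunks[i].append(name)
        totals[i] += size
    return chunks

files = [("data1.TOC", 1), ("DB1.001", 900), ("DB2.001", 700), ("DB3.001", 400)]
chunks = split_by_size(files, 2)
# Each chunk would then be written to a list file and passed to one
# rsync process via --files-from=<listfile>.
print(chunks)  # [['DB1.001', 'data1.TOC'], ['DB2.001', 'DB3.001']]
```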

            Concerning the number of processes: as mentioned in my previous answer, there is no general guidance as it depends on various factors like bandwidth.

            Regards, Boris

          • Hi Boris,


            Yes, I saw that there are multiple rsync processes and ssh processes during the synchronization time and I could see the files were being transferred quite OK.

            But then, after about halfway, only one rsync and one ssh were left, running very slowly. I could not find the reason why that is the case.

            Should the rsync version be identical between source and target?

            Source is UNIX rsync version 3.0.5, protocol version 30.

            Target is Linux rsync version 3.1.3, protocol version 31.



          • Hi Boris,


            I have noticed the following issue in the latest test upgrade run over the weekend:


            The upgrade triggers parallel file transfer via rsync. There were multiple rsync processes running, and each one got its own sync list to transfer the files contained in the list. Once those parallel rsyncs are done, there's always one rsync left, and it's transferring files which were already done and exist on the target host. Once this single rsync is completed, a new parallel file transfer for some new files is triggered, repeating the same process again (parallel rsync, then single rsync for old files).


            That's the reason I am having very slow file transfer. Serial mode would be faster. Do you know why there's always a single rsync transferring the old files again? I checked the file sizes on the target host and there's no difference.