Nandish GM

DMO with System Move on Azure – Serial mode

In this article, I discuss the migration of an on-premise system to the Azure cloud using the DMO procedure in serial mode, with emphasis on data transfer methods and how to optimize them.

There are two options for transferring the SUM directory data to the target host: serial or parallel transfer mode.

  • Serial data transfer mode

SUM exports all files to the file system, and the exported files must then be transferred manually to the target host.

(Source: SAP)

  • Parallel data transfer mode

In this mode, the SUM directory including all files is transferred to the target host in phase HOSTCHANGE_MOVE. The DMO procedure then continues on the source host: SUM starts creating export files, which you copy to the target host, where SUM is started in parallel for the import phase.

Note: It is strongly recommended to use parallel mode, since shadow operations take a long time. However, if for any reason parallel mode is not possible, we can proceed in serial mode and leverage the AzCopy tool, which gives a similar advantage to running in parallel mode, as explained further in this article.

The Database Migration Option (DMO), part of the Software Update Manager (SUM), gives you the option to migrate your system to HANA or Sybase (SAP ASE) as the target database, which can be combined with a system upgrade and Unicode conversion.


Based on the NetWeaver release of your source system, the following SUM version must be selected, at the latest patch level:

  • NetWeaver 7.4 or lower   – SUM 1.0 should be used
  • NetWeaver 7.5 or higher  – SUM 2.0 should be used

(Source: SAP)

Our scenario was a lift-and-shift migration from on-premise to the Azure cloud; no system upgrade was involved. DMO was invoked for the migration scenario only, without a system upgrade or Unicode conversion.

We need to add the parameter migration_only = 1 to the file SAPup_add.par in the path /usr/sap/SID/SUM/abap/bin/
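For reference, SAPup_add.par then needs to contain this single line; create the file in the path above if it does not already exist:

```
migration_only = 1
```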



Once SUM is started, it reads the above parameter and enables the migration-only scenario, in which no stack XML is required, as shown in the screen below.


Since we are migrating from on-premise to the Azure cloud, we need to select the option “Enable the migration with System Move”.


Based on the number of CPUs, table sizes, and so on, finalize optimal values for the parameters below. They play a significant role in the package splitting of large tables and affect both the export and import phases.


SUM then continues with the configuration, checks, and preprocessing phases, which take around 3–4 hours.

In the preprocessing phase, the development environment must be locked and the shadow instance is built.

Since we had already decided to proceed with the serial mode option, no action is required in the step below; SUM proceeds to the export phase.


The EXPORT phase is now complete. It's time to move the data to the target server.



The table below can be used to estimate the transfer time and, based on that, to choose between an offline transfer and an over-the-network transfer. It shows the projected time for network data transfer at various available network bandwidths (assuming 90% utilization).

(Source: Microsoft)
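As a back-of-the-envelope check on such a table, the transfer time for a given dump size and link speed can be computed directly. The 1.5 TB / 500 Mbps figures below are illustrative, matching the example later in this article:

```shell
# Rough network-transfer estimate: dump size in GB, link speed in Mbit/s,
# assuming 90% effective utilization of the link.
size_gb=1536      # ~1.5 TB export dump
link_mbps=500
effective_mbps=$(( link_mbps * 90 / 100 ))          # 450 Mbit/s usable
seconds=$(( size_gb * 8 * 1024 / effective_mbps ))  # GB -> Mbit, then divide
echo "approx. $(( seconds / 3600 )) hours per hop"  # prints "approx. 7 hours per hop"
```

Two sequential hops (source to Blob storage, then Blob storage to target) roughly double this, which lands near the 12-hour figure quoted for Option 1 below.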

While migrating a system from on-premise to Azure, data needs to be transferred in two phases if we use online transfer (this approach was used during this migration):

  • Data from source system to Azure Blob storage
  • Data from Azure Blob storage to Target server on Azure

AzCopy is a command-line tool for copying data to and from Azure Blob, File, and Table storage with optimal performance. It supports concurrency and parallelism, and can resume copy operations that were interrupted.
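For illustration, the two hops with AzCopy v10 might look as follows. The storage account, container name, SAS token, and SUM path are placeholders, not values from this migration:

```shell
# Hop 1: upload the SUM directory from the source host to Blob storage.
azcopy copy "/usr/sap/SID/SUM" \
    "https://mystorageacct.blob.core.windows.net/sumexport?<SAS>" --recursive

# Hop 2: on the target host, download the data from Blob storage.
azcopy copy "https://mystorageacct.blob.core.windows.net/sumexport/SUM?<SAS>" \
    "/usr/sap/SID/" --recursive
```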

We have two options when copying data from the source to the target system with the AzCopy tool.

  • OPTION – 1

Once the export phase is complete, copy the entire SUM folder to Azure Blob storage, and from there to the target server.

For example, it would take about 12 hours to copy 1.5 TB of data from source to target at 500 Mbps.

  • OPTION – 2

Once the export phase has started, data can be transferred intermittently to Azure Blob storage, and from there to the target server.

For example, if the EXPORT phase runs for 16 hours, data can be copied every 3 hours to Azure Blob storage and onward to the target server, since AzCopy supports parallelism and can resume interrupted copy operations.

In this scenario, data is copied to the target server in parallel while the EXPORT phase is running, which optimizes the copy method and saves around 12 hours compared to Option 1 above.
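A minimal sketch of such an intermittent transfer loop on the source host, assuming AzCopy v10 is installed; the export path, storage URL, SAS token, and process name are placeholders to adapt to your landscape. `azcopy sync` transfers only files that are new or changed since the last pass, so each iteration ships just the packages the export has produced in the meantime:

```shell
#!/bin/sh
# Hypothetical sketch: push freshly exported files to Blob storage every
# 30 minutes while the SUM export is still running. The directory, URL,
# and SAS token below are placeholders, not values from this migration.
EXPORT_DIR="/usr/sap/SID/SUM/abap/load"
BLOB_URL="https://mystorageacct.blob.core.windows.net/sumexport?<SAS>"

while pgrep -f SAPup >/dev/null; do                   # export still running?
    azcopy sync "$EXPORT_DIR" "$BLOB_URL" --recursive # only new/changed files
    sleep 1800
done

# Final pass once the export has finished, to pick up the last files.
azcopy sync "$EXPORT_DIR" "$BLOB_URL" --recursive
```

A mirror-image loop (syncing from Blob storage down to the import file system) would run on the target side.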

Once the data is available on the target server, start SUM and proceed with the IMPORT phase of DMO.

We can decide whether to reuse the current database tenant or recreate the existing tenant.

NOTE: Before proceeding further, it's good practice to perform disk defragmentation on the target server if the system is already in use.

The IMPORT phase begins as shown below.

The import is now complete, and we proceed with the postprocessing part of the DMO procedure.

Once postprocessing is complete, the DMO procedure comes to an end.

In a nutshell, we can still use serial mode for the DMO procedure and get a similar advantage to parallel mode by choosing Option 2, which minimizes or eliminates the standalone data transfer time. This significantly reduces the downtime window of the entire migration.

Source :


I wrote this article to share information intended as a general resource, along with personal insights. Errors or omissions are not intentional. Opinions are my own and not the views of my employers (past, present, or future) or any organization I may be affiliated with. Content from third-party websites, Microsoft, SAP, and other sources is reproduced in accordance with Fair Use for criticism, comment, news reporting, teaching, scholarship, and research.


      Vijay Chandra

      I don't see how you can save 12 hours compared to Option 1. In the example, Option 1 takes 12 hours to copy the full export dump of 1.5 TB to the target import file system. I assume the 12 hours is network transfer time only and does not include dump creation time. With intermittent transfers (Option 2) you're just moving single files (for example) as they are created by the export process into the target import file system. You're not starting the import as in parallel mode, so I can see you saving perhaps 4 to 5 hours overall.

      Can you explain what the 12 hours in step 1 consists of?




      Nandish GM (Blog Post Author)

      Hi Vijay,

      Yes, the 12 hours is network transfer only and does not include dump creation. By creating a simple script, exported files can be copied to the target system as soon as they are created. In this scenario, when the EXPORT is complete, you have the complete dump on the target side with at most a 10- or 15-minute delay. This was achieved in one of our migration scenarios.


      Nandish G M

      Roland Kramer


      Did you see the #sapfirstguidance documents SAP First Guidance – Implement SAP BW/4HANA in the Azure Cloud and SAP First Guidance – Using the new DMO to migrate to BW on HANA?

      They add additional knowledge to your blog.

      Best Regards Roland

      Nandish GM (Blog Post Author)

      Hi Roland,

      Thanks for sharing the #sapfirstguidance documents related to Azure and DMO, which give information on each phase involved in DMO.

      As you aptly said, it definitely adds knowledge to this blog and enhances its purpose.


      Thank you,

      Nandish G M

      AMIT Lal

      Nice blog! I agree, DMO with System Move in parallel mode is the best option. It uses a built-in script that calls the rsync utility; we used that for a 36 TB system, completed in 16 hours, on the cloud (any cloud!). Thanks.

      XiaoJun Liang

      Hi Amit,

      Did you edit the script to speed up the data transfer, or just use it as it is?




      AMIT Lal

      Hi XiaoJun,
      Yes, we did edit that script earlier based on our downtime requirements. The script is now called by a different name; I can't recall it.


      XiaoJun Liang

      Thanks for your reply, Amit. The script is now called by a different name. There's the parameter NUM PROC and the value is 4. Did you increase this one?

      AMIT Lal

      Yes, XiaoJun, it's this parameter in the SAPup par file:
      /proc/userenv = DMO_SYSTEMMOVE_NPROCS=16

      XiaoJun Liang

      Thank you, AMIT. Yes, I also set this one. I could see multiple rsync and ssh processes transferring data during the export, but halfway through only one rsync and one ssh process remained until the end. Not sure why that's the case.


      I also saw a couple of rsync errors, "error in rsync protocol data stream (code 12) at io.c(600)", but the transfer could still continue.

      I guess the rsync versions had better be identical. My source is UNIX with rsync version 3.0.5, protocol version 30; the target is Linux with rsync version 3.1.3, protocol version 31.




      AMIT Lal

      Good to know, XiaoJun!
      Yes, it's better to have identical rsync versions. Good luck!

      Shaswat C

      Hi Nandish,

      Firstly, great information! Did you use AzCopy on Linux?

      I have some issues when using it on SLES; it throws an error for the .NET Core dependency even though I have installed .NET Core. Also, did you use SAS to authenticate?

      Finally, is AzCopy preferred in terms of speed over a simple "scp" when copying between the source and target Linux servers?

      Thanks !

      Willem Lourens


      The result of "DMO with System Move" is exactly the same as a heterogeneous ABAP system copy, correct? What would be the benefits of using this method instead of the system copy?

      Which option would have the least system downtime?

      Thanks for the blog.

      Antonio Rivera



      Is AzCopy recommended for parallel mode?





      IBM Support

      Thanks for the blog.