
SoH migration – SUM DMO with system move, first-hand experience – part 2

Hello Avid readers,

As promised, Part 2 continues from my blog post https://blogs.sap.com/2022/03/25/soh-migration-sum-dmo-with-system-move-first-hand-experience-part-1

In this blog post we will cover the following.

  1. How to effectively use the DURATIONS file (MIGRATE_DT_DUR.XML) to reduce downtime.
  2. An out-of-the-box solution to fast-forward the downtime phase.
  3. SUM parallel mode execution.

Let’s start with how the MIGRATE_DT_DUR.XML file can contribute.

  1. Durations.xml

When you run one complete DMO cycle, SUM creates the MIGRATE_DT_DUR.XML file under the /usr/sap/<SID>/SUM/abap/analysis directory, and it remains available until you clean up SUM. The best way to use the durations file is to take the latest one from every successful run. As an example, if in your landscape you migrate the DEV system first, the durations file generated from this run might not be very helpful for the QAS system; but if your QAS system was refreshed from production, then the durations file generated from the QAS migration can be used for your next system (the production copy).

A durations file from your full export-import benchmarking results can also be used for your next SUM run. When SUM runs on a system for the first time, it estimates the export/import runtime and the table split algorithm based on table type, size, number of R3load processes, and many other factors. If this data is already provided to SUM, it optimizes the export and import process and runs much faster.
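As a sketch of the staging step, the durations file from the previous successful run can be preserved and copied into the download directory consumed by the next SUM run. All directory names below are illustrative, not the exact layout of any given landscape; substitute your own SID and paths.

```shell
#!/bin/sh
# Sketch: keep MIGRATE_DT_DUR.XML from the previous successful SUM run
# (before cleaning up SUM) and stage it for the next run.
# Paths are examples only -- adjust to your landscape.
stage_durations() {
  analysis_dir="$1"   # e.g. /usr/sap/<SID>/SUM/abap/analysis
  download_dir="$2"   # the download directory used by the next SUM run
  # -p preserves timestamps/permissions of the original file
  cp -p "${analysis_dir}/MIGRATE_DT_DUR.XML" "${download_dir}/" &&
    echo "staged durations file in ${download_dir}"
}

# Example call (hypothetical paths):
# stage_durations /usr/sap/DEV/SUM/abap/analysis /usr/sap/DEV/download
```

Run this before starting SUM for the next system, so that the estimates and table splits are derived from measured durations rather than first-run heuristics.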

We were lucky that we could run many MOCKs, but if you cannot, run at least the two MOCKs for DMO described below.

MOCK1 –

Run a plain SUM DMO lifecycle on a production-copy system and extract all necessary information, such as MIGRATE_DT_DUR.XML, the table split information, and UPGANA.XML (which is also uploaded and sent to SAP for the DMO downtime optimization app, https://launchpad.support.sap.com/#/downtimeoptimization).

Data cleansing and reorganization, which were discussed in Part 1, can also be done in this environment for business-critical tables, to leave a bit of room for planning these activities in production.

MOCK2 – 

For this second MOCK run, make sure that the SAP housekeeping activities for the identified tables and the database reorganization are complete, DB statistics are up to date, and MIGRATE_DT_DUR.XML has been added to the SUM download directory. Now run several benchmarks and measure downtime; you should already notice an improvement compared to MOCK1 because of the changes that were made.

Every time you run a benchmark, use the latest MIGRATE_DT_DUR.XML, because the table durations, split, and sorting for your next run are based on the previous one. When you finally run the last benchmark, keep that durations file for your MOCK2. You’ll see the following when the durations file is used by DMO.

 

 

We saw a 50% improvement in export duration and downtime by providing the durations file. The picture below speaks for itself.

If this MOCK run fulfills your downtime requirement, then you are all set.

We, however, were not happy with this downtime and wanted further improvement, so we looked for other options. I must admit that this was one of the most stressful times, and we looked everywhere for a solution.

2. The magic solution

Most of the time, when a hardware migration (system move) is involved, the source system is an older-generation server with low performance compared to the target hardware, which is high-end and HANA-compatible. In many cases, this hardware gap is one of the reasons for longer export durations during downtime. To overcome it, we must think of ways to either upgrade the hardware or increase the resources, if possible. We didn’t have room for either, and here’s what we resorted to, which I call “MAGIC”.

Here we leveraged the flexibility that SAP offers in terms of heterogeneous installations. The source OS was HP-UX and the target was Linux. We used one of the target AAS hosts on Linux to connect to the source DB and CI and started SUM on this AAS. Our aim was to leverage the better-performing hardware of this AAS for the export.

The only prerequisites this arrangement requires are an ASCS split on the source and a matching kernel version. Mounting /trans and /sapmnt is not mandatory, since mounting a file system from a different OS is a complex process, and we do not want any synchronization of executables either.
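Since the matching kernel version is the critical prerequisite here, a quick sanity check before starting SUM on the AAS is to compare the kernel release reported on the source CI and on the AAS (for example via `disp+work -v` run as the <sid>adm user on each host). A minimal sketch of the comparison, with the two release strings captured by hand and passed in:

```shell
#!/bin/sh
# Sketch: compare the kernel release strings of the source CI and the AAS
# before starting SUM. The strings would be taken from the kernel release
# line of `disp+work -v` output on each host; here we only compare them.
kernel_match() {
  src_release="$1"   # kernel release string from the source CI
  aas_release="$2"   # kernel release string from the AAS
  if [ "$src_release" = "$aas_release" ]; then
    echo "kernel versions match"
  else
    echo "MISMATCH: source='${src_release}' aas='${aas_release}'"
    return 1
  fi
}

# Example (hypothetical release strings):
# kernel_match "753 patch 900" "753 patch 900"
```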

An important point is to create a batch server group pointing to this AAS and to make sure that SUM runs all of its batch jobs on this server; otherwise you’ll run into unnecessary batch-job-related errors.

Here’s how this scenario looks:

 

 

Now that your SUM runs on an AAS with upgraded hardware, you’ll be able to increase the number of R3load, SQL, and background process parameters in SUM. The export duration, which was around 12 hours, was reduced to 1 hour and 13 minutes. Isn’t that unbelievable? The results are shown below; doesn’t this look like a perfect R3load graph without any tail?

 

This was a major breakthrough, and we could finally meet the downtime requirement. A comparison between the final migration and MOCK1 is shown below.

 

Using the above solution not only reduced the export duration but also helped with PARALLEL execution and the file transfer to the target SUM. In a heterogeneous migration where the source and target OS are different, the rsync setup must be done manually, and it can lead to performance issues if you are not an rsync expert. But when we could use the SAP-delivered dmosystemmove.sh (which is delivered for Linux-only environments), it worked like a charm.
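For readers who do have to set up rsync manually in a heterogeneous scenario, the transfer step can be sketched roughly as below. Host names and dump directories are assumptions for illustration; the actual DMO dump location depends on your SUM setup, and this is not the logic of dmosystemmove.sh itself.

```shell
#!/bin/sh
# Sketch: sync the export dump directory from the source host to the
# target SUM host. -a preserves permissions and timestamps; --partial
# lets an interrupted transfer resume instead of restarting from zero.
sync_dump() {
  src_dir="$1"   # local export dump directory on the source host
  dst_dir="$2"   # local dir, or remote target like host:/path (example)
  rsync -a --partial "${src_dir}/" "${dst_dir}/"
}

# Example (hypothetical hosts and paths); rerun the sync while R3load is
# still producing new dump files, then once more after the export ends:
# sync_dump /usr/sap/PRD/SUM/abap/load targethost:/usr/sap/PRD/SUM/abap/load
```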

3. Parallel execution

The parallel execution mode is the most efficient option for a migration; refer to the blog post https://blogs.sap.com/2020/10/22/dmo-with-system-move-with-shd-rep-on-target-db/

The best part of a SoH or S/4HANA migration is the assurance that your target database is nothing less than a rocket. All you have to do is set the right parameters, test them in MOCK migrations, and analyze the impact. Please go through the recommendations provided in SAP Note https://launchpad.support.sap.com/#/notes/2600030

and run SQL statements that are described in https://launchpad.support.sap.com/#/notes/1969700

It is also important to check the source system parameters as suggested in https://blogs.sap.com/2015/03/17/dmo-optimizing-system-downtime/ and SAP note http://service.sap.com/sap/support/notes/936441

 

Last but not least, always believe in yourself.

Do share your thoughts in the comments section. In case of questions, please use Q&A to post them in the community, using the tag https://blogs.sap.com/tags/681405860242501232266070960678260

 

Regards

Radhika Chhabra


      4 Comments
      Boris Rubarth

      Hi Radhika,

      again congrats for this detailed explanation including experiences and relevant links!

      Maybe you can elaborate a bit more on those two aspects:

      1. Using an Additional Application Server (AAS) with better performance (to run SUM on it) is a good idea. You used the "target AAS", so I guess a host that was initially dedicated to the target infrastructure. Is it correct to assume that source and target landscape / hosts are located in the same data center?
      2. "Mounting of /trans and /sapmnt is not mandatory ..." - so the SUM did not complain on that aspect?

      Thanks and kind regards,
      Boris

      Radhika Chhabra
      Blog Post Author

      Hi Boris,

      Thanks for reading and providing the feedback, here are my comments which I will also add to Part-2.

      1. Yes, the data center location was the same, so we had the benefit of a good network connection between the source and the target hardware.
      2. SUM did not complain about a shared /sapmnt and we used a local /trans directory on AAS for SUM.
      3. To further elaborate, we updated the kernel on AAS to match source DB and while we did this, we had to first unmount the target /sapmnt from AAS in order to avoid profiles and exe files from getting synced with target SAP installation. And because we manually adjusted the AAS, we did not face any challenge as such while executing SUM.
      4. The trans directory, SUM log directory, batch job execution, and SYS log access during SUM execution did pose some trouble, because all of these should be accessible from both the CI and the AAS. In our scenario we resorted to running all SUM-related batch jobs on the AAS instead of distributing them to the other source system app servers.

      Thanks and regards

      Radhika Chhabra

      Premkishan Chourasia

      Hi Radhika, really appreciate the enthusiasm that you have shown in writing this blog, especially the approach, the key points to watch out for as part of the MOCK runs, and the MAGIC stuff!

      I have one query: in your blog, regarding the usage of the AAS from the target machine, did the source database not create any bottleneck?

       

      Regards,

      Prem

      Radhika Chhabra
      Blog Post Author

      Hi Premkishan,

      Thanks for reading the blog post. The AAS from the target machine was first disconnected from the target database and shared directories. The kernel then had to be updated to match the version supported by the source database; we used the same kernel version as on the source application servers.

      Once all of this was done, there were no bottlenecks, though you do have to test this scenario in a MOCK environment first, and I'd suggest a connectivity test of the AAS in the production environment to plan the action accurately.

      We also removed all dialog work processes from the AAS so it wouldn't serve any user requests, and created a separate background server group to keep it as a dedicated server for the SUM run.

      Regards

      Radhika