SoH migration – SUM DMO with System Move, first-hand experience – part 1
This blog post covers the one-step upgrade and migration approach, performance optimization, and downtime minimization for migrating ECC on an Oracle database to the HANA database with an EHP upgrade, via SUM DMO with the SYSTEM MOVE option.
We recently migrated a customer from ECC 6.0 EHP4 on an Oracle database to ECC 6.0 EHP8 on a HANA database, including a hardware migration (DMO with SYSTEM MOVE).
The main objective of the project was to achieve a downtime of 8–12 hours for a source database of 5 TB. The setup was complex enough that the downtime-optimized DMO route was not an option, and multiple downtime windows were not affordable at all.
The challenge, then, was to make this a successful execution within the expected downtime. At first glance it seemed unachievable, but unprecedented times lead to unexpected learning, and they indeed did. In this blog post I will share details on the following:
- Project planning and approach
- Landscape strategy
- Downtime optimization strategy
1. Project planning and approach

The project planning foresaw 3 DMO cycles up to the production migration, as shown below, except that the number of MOCK runs increased during the actual project.
In the table below, a DMO cycle denotes one full export/import SUM run for SUM DMO with SYSTEM MOVE.
| DMO cycle | System | Activities |
| --- | --- | --- |
| DMO1 | DEV | Impact analysis for code remediation<br>Intensive testing in DEV<br>Prepare and execute unit, functional and interface testing |
| DMO2 | MOCK1 on production | End-to-end regression testing<br>Performance testing<br>Functional and interface testing<br>UAT and sign-off |
| – | QAS build using DB restore method | QAS system built from the MOCK system using backup and restore<br>Interface testing and validation<br>Set up DR system replication |
| DMO3 | MOCK2 on production | Recreate all steps that will be performed in the production system<br>Finalize production downtime |
| DMO4 | PROD | Cutover and Go-Live |
DMO1 was the migration of the development system, where we faced a lot of issues as expected; hence this SUM run took the longest time.
DMO2 was the migration of a MOCK environment copied from production to the HANA server. Here we faced a few more issues, all of them known ones, and the SAP documentation proved quite helpful. The downtime came out to around 2 days and 11 hours.
We built the QAS system with the DB restore method and tested DR at this stage.
DMO3 was another MOCK run with the latest production data. The SUM run went more smoothly this time; however, the downtime requirement was still not met, at about 1 day and 9 hours. So we ran a couple more MOCK runs to improve the downtime.
DMO4 was the final production migration.
2. Landscape strategy
The diagram below depicts the landscape strategy that we incorporated.
It is a dual-system landscape where the old systems (source SAP systems) on the old hardware run in parallel until the production Go-Live. The dual-maintenance strategy has the advantage of minimizing the change freeze, i.e. no impact on ongoing developments; however, it comes with its own restrictions.
To quote one of them: any change made in the source must also be made in the target and vice versa, and the two landscapes must be kept in sync. A change freeze is applied once the cutover starts.
3. Downtime optimization
- The SAP downtime optimization app (https://launchpad.support.sap.com/#/downtimeoptimization) proved the most useful and effective tool for gaining insight and comparing different DMO runs. After every run, SUM asks you to upload UPGANA.XML. Here is an example of how it looks.
Two DMO runs can also be compared, as shown below, to analyze the differences and improvements in downtime.
- Data cleansing is another important area of any migration project that nobody dares to dig into, because business users are quite reluctant to entertain the idea. In our experience, data cleansing contributed a lot to reducing the data export time and hence the downtime. Here is how.
a. At the end of a DMO run, the "DMO POST analysis" provides the durations of all tables, the R3load graph, the table split order, etc., so you can analyze where optimization is possible. For example, in the graph below we can see that there is a tail towards the end of the export.
Hence, the target of my next simulation would be to reduce this tail.
b. Now, to optimize the downtime, we must look at the problem tables, i.e. the tables that took the longest to process. In our case, tables such as DBTABLOG, JHAGA and SOFFCONT1 took the most time. Therefore, we started the SAP-recommended cleanup with tables like DBTABLOG, RFBLG, etc.
After data cleaning, a database reorg must be performed. We performed a database reorg for all problem tables, even those from which no data was deleted, and it worked well thereafter.
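To illustrate the idea of ranking problem tables, the snippet below sorts a simplified per-table durations extract, as one might pull it from the DMO post-analysis. The table names are taken from the text above, but the durations are purely hypothetical:

```python
# Hypothetical per-table export durations in seconds; the numbers are
# illustrative only, not measurements from this project.
durations = {
    "DBTABLOG": 5400,
    "SOFFCONT1": 4800,
    "JHAGA": 4100,
    "RFBLG": 2600,
    "BKPF": 900,
}

# Rank tables by export duration to find the candidates for cleansing/reorg.
problem_tables = sorted(durations, key=durations.get, reverse=True)[:3]
print(problem_tables)  # → ['DBTABLOG', 'SOFFCONT1', 'JHAGA']
```

The same ranking logic applies whichever source you take the durations from, e.g. the DMO POST analysis or the DURATIONS file.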
c. Table splits – as SAP claims, DMO is an optimized tool for HANA migrations in every possible way; you just have to learn how to make the best use of it. Make sure you understand the table-split mechanism that runs automatically during the preparation phases of SUM DMO; I found the blog https://blogs.sap.com/2015/06/11/dmo-background-on-table-split-mechanism/ very helpful in achieving our project goals.
The R3load process count provided when defining the SUM parameters for UPTIME should be kept the same as for DOWNTIME (this does not refer to actual uptime or downtime; it is specific to the R3load and SQL parameters). Once an R3load and SQL count has been established through several benchmark runs, it should be fixed for the actual run and must not be altered while SUM is running.
The table split mechanism is based on the R3load (UPTIME) processes and the DURATIONS file.
- SUM benchmarking tool – before executing a MOCK run, run the benchmarking as described in https://blogs.sap.com/2015/03/17/dmo-optimizing-system-downtime/ to determine the right process count for your environment. The CPU-count calculations in the SAP documentation are good for a theoretical estimate, but the most efficient number that does not break the system at the same time must be established through this simulated run.
To summarize, if you want to make the best use of SUM DMO, run the benchmark iterations as many times as needed to establish the most suitable settings for your environment.
Always use the latest SUM patch level, even in the middle of your project; SAP keeps improving it constantly.
Last but not least, if you read the SAP documentation thoroughly, you will always know what SUM is doing in the foreground and what is going on in the background.
In Part 2 I will cover the points below, which contributed the most to optimizing the SUM downtime.
- Make definitive use of the DURATIONS file (MIGRATE_DT_DUR.XML) in all benchmarking as well as MOCK runs; with every run there will be optimization and improvement.
- Execute the SUM export and import in parallel mode, and refer to SAP Note https://launchpad.support.sap.com/#/notes/1616401 to understand it and set it up correctly.
- And finally, a little out-of-the-box thinking can work wonders – stay tuned: this solution is rarely used, but when it works it is almost like magic.
Do share your thoughts in the comments section. In case of questions, please use Q&A to post them in the community, with the tag https://blogs.sap.com/tags/681405860242501232266070960678260
Thank you for this comprehensive and helpful blog post! Very much looking forward to part 2 🙂
Thanks for your comments.
Part 2 is out now.
Very precise and comprehensive blog, really helpful. Thanks a lot for posting this blog.
Very informative blog, thank you!
Small question, how big was the SUM directory for your 5TB DB? Any insights how we can estimate how much space is needed?
Thank you for reading the blog post.
The SUM directory was 1.3 TB. The space calculation depends on many factors: extensive custom developments, EHP/SP updates and add-ons included in the SUM run, or any additional installed languages apart from English and German.
We had EHP upgrade, SP updates and a good amount of custom developments.
In our experience, the space is required during the execution phase in the source system (export) and from preparation (shadow system creation and import) until SUM completion in the target system. In a DMO with System Move scenario, dump files containing the compressed source database tables are created in the SUM folder, and the whole database is exported. The size of the export files can be estimated using a roughly 1:4 compression ratio, because only the table data is dumped into the load files when tables are imported; indexes etc. are not.
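As a minimal sketch of that estimate, using the figures from this thread (the 1:4 ratio is the rule of thumb stated above, not a guaranteed value):

```python
def estimate_export_size_tb(db_size_tb: float, compression_ratio: float = 4.0) -> float:
    """Rough SUM export-size estimate: only table data ends up in the load
    files (indexes are rebuilt on import), compressed at roughly 1:4."""
    return db_size_tb / compression_ratio

# For the 5 TB source database discussed here:
print(estimate_export_size_tb(5.0))  # → 1.25, close to the 1.3 TB SUM directory observed
```

The actual ratio varies with the data mix (cluster tables, LOBs, etc.), so treat this as a sizing starting point and verify with a benchmark export.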
Good blog, well articulated and helpful information. How did you move the dump files from source to target? We are migrating to cloud, just wondering how the files will be moved.
Thanks for your appreciative comments.
About moving the dump files: we configured "rsync", and if you refer to Part 2 of this blog, we used the SAP standard script dmosystemmove.sh, which also uses "rsync" and is quite efficient.
Now, in your scenario it depends on which cloud you are migrating to, and you would have to find an efficient solution for file transfer. I would recommend trying multiple options and establishing the most reliable and accurate method in a test environment.
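For a plain on-premise transfer, an rsync invocation along these lines can be assembled and re-run as new dump files appear; the paths and host name below are illustrative, not this project's actual directories:

```python
import shlex

def rsync_command(src_dir: str, host: str, dest_dir: str) -> str:
    """Build an rsync command for shipping SUM dump files to the target host."""
    # -a preserves attributes, -z compresses in transit, --partial lets an
    # interrupted transfer resume; re-running only ships new/changed files.
    args = ["rsync", "-az", "--partial", f"{src_dir}/", f"{host}:{dest_dir}/"]
    return shlex.join(args)

cmd = rsync_command("/sapmnt/SUM/abap/load", "targethost", "/sapmnt/SUM/abap/load")
print(cmd)
```

For a cloud target, the same incremental idea applies, just with whichever transfer tool the provider recommends instead of rsync.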
Is it possible to migrate from Windows NT, Oracle, SAP Basis 7.5 to HANA DB on Linux with DMO with System Move (serial mode)?
Hope it is OK to draw your attention to the SAP Support Portal page on SUM (Software Update Manager (sap.com)), as it always points to the current SAP Notes on DMO. Those SAP Notes list the requirements and restrictions.
Judging by the few parameters you listed, it should be supported.
Thanks and kind regards,
Boris (Product Management SUM, SAP SE)