Phases behind DMO R3load parallel export/import during UPTIME and DOWNTIME to target HANA DB
After completing several DMO projects (BW, CRM, and ERP), I found that a good way to refresh and retain the knowledge gained is to write about and share what I learned on the DMO parallel export/import phase, by observing and studying its behavior via the logs, migration files, and command files in each and every project, in addition to the good sources available in the SAP guides and notes, great blogs, and documents on SCN.
This blog focuses on how DMO works its magic: migrating the upgraded/updated shadow repository during uptime and the application data during downtime to the target HANA DB, as depicted in pictures A and B, steps 2a and 2b (highlighted in red).
Picture A: (Source credit to @Roland Kramer and @Boris Rubarth, Thanks!)
Picture B: (Source credit to @Roland Kramer and @Boris Rubarth, Thanks!)
Picture C shows the DMO-specific phases behind the uptime migration during Preprocessing/Shadow Import. I will talk about the phases highlighted in red, such as how the shadow repository and its objects are created and moved to the HANA database, and how DMO knows which R3load, kernel, and binaries to use when there are two different databases involved (in our case, source = Oracle and target = HDB).
From the above, we know that the upgraded/updated shadow repository created in the source is ready to move to the HANA database. The clone size is calculated in two groups, UT and DT, based on the objects in the PUTTB_SHD table.
UT = system tables (e.g. DD*, REPO*, etc.)
DT = data tables
PUTTB_SHD = the control table for the shadow import during the upgrade; it lists the tables that need to be copied and imported into the shadow
Example selection syntax (the variables vary from phase to phase):
EU_CLONE_UT_SIZES:
Selecting from 'PUTTB_SHD' with condition '( ( ( CLONE == "U" or CLONE == "B" ) and ( SRCTYPE == "J" or SRCTYPE == "T" or SRCTYPE == "P" or SRCTYPE == "C" ) and SRCFORM == "T" ) or ( ( CLONE == "S" or CLONE == "C" or CLONE == "F" or CLONE == "G" or CLONE == "T" or ( CLONE == "U" and FLAGNEW == "X" ) ) and ( DSTTYPE == "J" or DSTTYPE == "T" or DSTTYPE == "P" or DSTTYPE == "C" ) and DSTFORM == "T" ) )'.
EU_CLONE_DT_SIZES:
Selecting from 'PUTTB_SHD' with condition '( ( CLONE == "D" or CLONE == "C" or CLONE == "G" or CLONE == "T" ) and ( SRCTYPE == "J" or SRCTYPE == "T" or SRCTYPE == "P" or SRCTYPE == "C" ) and SRCFORM == "T" )'.
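To make the two selection conditions easier to read, here is a minimal sketch of them as Python predicates over hypothetical PUTTB_SHD rows. The field names are taken from the conditions above; the sample rows and their values are invented for illustration only.

```python
# Sketch: the EU_CLONE_*_SIZES selection conditions over PUTTB_SHD rows.
# Field names come from the conditions above; the sample rows are invented.

def is_ut(r):
    # EU_CLONE_UT_SIZES condition (shadow/system tables)
    return (
        (r["CLONE"] in ("U", "B")
         and r["SRCTYPE"] in ("J", "T", "P", "C")
         and r["SRCFORM"] == "T")
        or
        ((r["CLONE"] in ("S", "C", "F", "G", "T")
          or (r["CLONE"] == "U" and r["FLAGNEW"] == "X"))
         and r["DSTTYPE"] in ("J", "T", "P", "C")
         and r["DSTFORM"] == "T")
    )

def is_dt(r):
    # EU_CLONE_DT_SIZES condition (application data tables)
    return (r["CLONE"] in ("D", "C", "G", "T")
            and r["SRCTYPE"] in ("J", "T", "P", "C")
            and r["SRCFORM"] == "T")

rows = [
    {"TABNAME": "DD02L", "CLONE": "U", "FLAGNEW": "", "SRCTYPE": "T",
     "SRCFORM": "T", "DSTTYPE": "T", "DSTFORM": "T"},
    {"TABNAME": "VBAK", "CLONE": "D", "FLAGNEW": "", "SRCTYPE": "T",
     "SRCFORM": "T", "DSTTYPE": "T", "DSTFORM": "T"},
]

print([r["TABNAME"] for r in rows if is_ut(r)])  # ['DD02L']
print([r["TABNAME"] for r in rows if is_dt(r)])  # ['VBAK']
```

A system table such as DD02L matches the UT condition, while an application table such as VBAK matches the DT condition, which is exactly the UT/DT split described above.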
The directories migrate_ut and migrate_dt are created by the phases EU_CLONE_MIG_UT_PRP and EU_CLONE_MIG_DT_PRP respectively, in /SUM/abap/.
Both the migrate_ut and migrate_dt directories contain .CMD, .STR, and other files generated by R3ldctl. The .TSK files are generated by R3load during export/import and hold the migration result for each table (EXP = export files; IMP = import files).
EU_CLONE_MIG_*T_PRP: Prepares the tables (COUNT(*)), splits tables above a certain size threshold, and writes the list of shadow tables and views to be imported, along with other detailed information, into the bucket file MIGRATE_UT.BUC.
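The PRP phase groups tables into buckets that the R3load pairs later process in parallel. Here is a minimal sketch of such bucketing by row count; the threshold, table names, and sizes are invented, and the real SUM algorithm behind MIGRATE_UT.BUC is more elaborate than this.

```python
# Sketch: grouping tables into migration buckets by row count.
# Threshold and table sizes are invented; the real SUM bucketing
# algorithm (MIGRATE_UT.BUC) is more elaborate.

def make_buckets(tables, threshold):
    """tables: list of (name, rowcount); returns a list of buckets."""
    buckets, current, current_size = [], [], 0
    for name, rows in sorted(tables, key=lambda t: -t[1]):
        if rows >= threshold:
            # Large table: gets its own bucket (a candidate for table splitting).
            buckets.append([name])
            continue
        if current_size + rows > threshold and current:
            buckets.append(current)
            current, current_size = [], 0
        current.append(name)
        current_size += rows
    if current:
        buckets.append(current)
    return buckets

tables = [("VBAK", 120_000), ("DD02L", 30_000), ("PAT13", 5_000), ("T000", 100)]
print(make_buckets(tables, threshold=50_000))
```

With these invented numbers, VBAK exceeds the threshold and lands in its own bucket (to be split further), while the three smaller tables share one bucket.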
EU_CLONE_MIG_*T_CREATE: R3load (HANA) runs to create the table structures in HANA.
How to verify?
There are only MIGRATE_UT_CREATE_*_IMP.TSK files, but no *_EXP.TSK files, in SUM/abap/migrate_ut_create and SUM/abap/migrate_dt_create. You'll see object type (T) and action (C) in the .TSK files.
Example: a random check on several .TSK files returns object type Table and action Create (in bold):
UT
T SXMS_SEQUENCE C ok
T SXMS_EO_RETRY_ER C ok
T SXMS_CUST_HDR C ok
T SXMSPLSRV C ok
T SXMSCONFDF C ok
T SXI_LINK C ok
DT
T ARCH_IDX_S C ok
T CRMC_ICSS_REG C ok
T CRMD_DHR_HSLSQUO C ok
T PAT13 C ok
T ARCH_OCLAS C ok
T CRMC_ICSS_IO_ATR C ok
To explain the syntax of a .TSK file line: each line contains the object type (T = table), the object name, the action (C = create), and the status (ok).
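Such a line can be parsed with a few lines of Python. This is only a sketch: the T and C codes are taken from the examples above, real .TSK files use further type and action codes that are not covered here.

```python
# Sketch: parsing R3load .TSK lines of the form "T <name> C ok".
# The T (table) and C (create) codes come from the examples above;
# real .TSK files use additional codes not listed here.

TYPES = {"T": "table"}
ACTIONS = {"C": "create"}

def parse_tsk_line(line):
    obj_type, name, action, status = line.split()[:4]
    return {
        "type": TYPES.get(obj_type, obj_type),
        "name": name,
        "action": ACTIONS.get(action, action),
        "status": status,
    }

print(parse_tsk_line("T SXMS_SEQUENCE C ok"))
```

Running this on the first UT example line above yields type "table", action "create", and status "ok", matching the verification step described earlier.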
EU_CLONE_MIG_UT_RUN (UPTIME): The entries of the *UT* group tables are exported from the shadow repository and imported into HANA in parallel by R3load pairs: the first R3load (part of the shadow kernel) exports the data, and the second R3load (part of the target kernel) imports the data into the SAP HANA DB.
Both R3loads run in parallel on the same host. No export files (dump files) are created, because the data transfer between the R3load pair happens through the main memory of the host. This R3load option is called memory pipes (currently available only on non-Windows hosts).
To understand more, refer to two great blogs by Boris Rubarth: "DMO: technical background" and "DMO: comparing pipe and file mode for R3load".
This is proven in the MIGRATE_UT_*_EXP.CMD and MIGRATE_UT_*_IMP.CMD files, where you can see that a 'PIPE' is used:
Example:
tsk: "/usr/sap/SID/SUM/abap/migrate_ut/MIGRATE_UT_00042_EXP.TSK"
icf: "/usr/sap/SID/SUM/abap/migrate_ut/MIGRATE_UT_00042_EXP.STR"
dcf: "/usr/sap/SID/SUM/abap/migrate_ut/DDLORA_LRG.TPL"
dat: "/usr/sap/SID/SUM/abap/migrate_ut/MIGRATE_UT_00042.PIPE"
tsk: "/usr/sap/SID/SUM/abap/migrate_ut/MIGRATE_UT_00042_IMP.TSK"
icf: "/usr/sap/SID/SUM/abap/migrate_ut/MIGRATE_UT_00042_IMP.STR"
dcf: "/usr/sap/SID/SUM/abap/migrate_ut/DDLHDB_LRG.TPL"
dat: "/usr/sap/SID/SUM/abap/migrate_ut/MIGRATE_UT_00042.PIPE"
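The memory-pipe mechanism can be illustrated with a named pipe and a writer/reader thread pair. This is a toy stand-in for the R3load export/import pair, not the real implementation: the pipe path and the data are invented, and like the real memory pipes it relies on POSIX (non-Windows) named pipes.

```python
import os
import tempfile
import threading

# Toy illustration of the R3load memory-pipe idea: an "exporter" thread
# writes rows into a named pipe while an "importer" thread reads them,
# so no dump file ever hits disk. Path and data are invented.

pipe = os.path.join(tempfile.mkdtemp(), "MIGRATE_UT_00042.PIPE")
os.mkfifo(pipe)  # POSIX only, matching the non-Windows restriction of memory pipes

rows = ["row1", "row2", "row3"]
received = []

def exporter():
    with open(pipe, "w") as f:  # blocks until the importer opens the pipe
        for r in rows:
            f.write(r + "\n")

def importer():
    with open(pipe) as f:
        for line in f:
            received.append(line.strip())

t1 = threading.Thread(target=exporter)
t2 = threading.Thread(target=importer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # all rows arrive via the pipe, with nothing written to a data file
```

Both threads must run at the same time for the transfer to happen, which mirrors why the export and import R3loads of a pair always run in parallel on the same host.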
Also, you can see that the update times of the export and import .TSK files are identical or very close to each other.
Mar 22 11:16 MIGRATE_UT_00010_IMP.TSK
Mar 22 11:16 MIGRATE_UT_00010_EXP.TSK
Mar 22 11:16 MIGRATE_UT_00001_IMP.TSK
Mar 22 11:16 MIGRATE_UT_00008_IMP.TSK
Mar 22 11:16 MIGRATE_UT_00008_EXP.TSK
Mar 22 11:16 MIGRATE_UT_00009_IMP.TSK
Mar 22 11:16 MIGRATE_UT_00009_EXP.TSK
Mar 22 11:17 MIGRATE_UT_00014_IMP.TSK
Mar 22 11:17 MIGRATE_UT_00014_EXP.TSK
By the way, how does SUM-DMO know which R3load binaries to use, since there is both a shadow kernel and a target HANA kernel?
DMO distinguishes them during the configuration phase: the source DB (shadow) kernel is extracted to SUM/abap/exe, whilst the target HANA kernel goes to SUM/abap/exe_2nd/.
Result at the end of the SUM configuration phase:
R3load_25-10012508.SAR PATCH UNPACK_EXE OK SAP kernel patch: R3load ,Release: 741
R3load_25-10012508.SAR PATCH UNPACK_EXE2ND OK SAP kernel patch: R3load ,Release: 741
dw_25-10012457.sar PATCH UNPACK_EXE OK SAP kernel patch: disp+work ,Release: 741
dw_25-10012457.sar PATCH UNPACK_EXE2ND OK SAP kernel patch: disp+work ,Release: 741
The above phases run during UPTIME, and only EU_CLONE_MIG_UT_RUN is executed, not EU_CLONE_MIG_DT_RUN. Again, referring to step 2b in both pictures A and B, the application data only moves to the target database (HANA) once the system enters DOWNTIME.
Picture D: Application data migrated to target Database (HANA) via phase EU_CLONE_MIG_DT_RUN:
EU_CLONE_MIG_DT_RUN (DOWNTIME): During downtime, the entries of the application data (DT) tables are exported from the source and imported into HANA in parallel, using the same R3load pairs as in phase EU_CLONE_MIG_UT_RUN.
Lastly, the consistency of the migrated content is checked via COUNT(*) on each table in the source and in the target database. These checks can be maintained/adjusted in /bin/EUCLONEDEFS_ADD.LST (with /bin/EUCLONEDEFS.LST as a reference) using the options below:
ignlargercount -> applied when a table might change during cloning (e.g. REPOSRC)
igncount -> the table count is ignored
nocontent -> the table content does not exist in HANA (DBATL, SDBAH; DB-specific tables)
noclone -> the table is not cloned at all (/BIC*, BW temporary tables)
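The final consistency check can be sketched as a count comparison that honors those options. The option names come from the EUCLONEDEFS list above; the count values, table-to-option mapping, and the direction in which ignlargercount tolerates a difference are assumptions made for illustration.

```python
# Sketch: post-migration COUNT(*) consistency check honoring the
# EUCLONEDEFS options above. Counts and the exception mapping are
# invented; the ignlargercount direction is assumed for illustration.

exceptions = {
    "REPOSRC": "ignlargercount",  # may change during uptime cloning
    "SDBAH":   "nocontent",       # DB-specific table, no content in HANA
    "/BIC/T1": "noclone",         # BW temporary table, not cloned at all
}

def check(table, src_count, tgt_count):
    rule = exceptions.get(table)
    if rule in ("igncount", "nocontent", "noclone"):
        return "skipped (%s)" % rule
    if rule == "ignlargercount" and tgt_count >= src_count:
        return "ok"
    return "ok" if src_count == tgt_count else "MISMATCH"

print(check("VBAK", 1000, 1000))     # ok
print(check("VBAK", 1000, 999))      # MISMATCH
print(check("REPOSRC", 1000, 1005))  # ok (grew during uptime cloning)
print(check("/BIC/T1", 0, 0))        # skipped (noclone)
```

Tables without an exception entry must match exactly, while the listed options relax or skip the comparison, which is what EUCLONEDEFS_ADD.LST lets you control.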
I hope this blog helps others understand DMO better. Please correct me on any incorrect statement. Extra input and shared information are greatly welcome!
Cheers,
Nicholas Chang
Great blog Nicholas, a couple of questions:
Is there a limit on the number of R3load processes which can be used?
Can the DMO tool be run from anywhere (an additional app server or the HANA DB itself), or is it mandatory to run it from the PAS?
Is there some kind of tracing we can enable if R3load fails?
How do we calculate the physical memory needed for DMO?
Thank you
Jonu
Hi Jonu Joy,
The number of R3load processes/memory still depends on the number of CPUs and the memory resources available, and the recommendations used in classical upgrade and migration are valid and similar for DMO. However, you can adjust the number of parallel processes dynamically for each phase in DMO; refer to the DMO guide.
DMO should run on the PAS. The procedures and recommendations for a normal upgrade and migration still apply to DMO.
If R3load fails, the error messages should be recorded in /abap/log. To increase the trace level, you can refer to note 885441.
Thx Nicholas,
I am wondering if you know what command DMO runs to check the consistency of profiles, whereby it can find duplicate and contradicting entries.
2 ETQ399 Checking consistency of profiles '/sapmnt/ABC/profile/ABC_D10_abc05' and '/sapmnt/ABC/profile/START_D10_abc05'.
2WETQ399 File '/sapmnt/ABC/profile/ABC_D10_abc05' l. 39: Found duplicate entry for 'exe/icmbnd' within same profile!
2EETQ399 File '/sapmnt/ABC/profile/ABC_D10_abc05' l. 141: Found contradicting entry for 'rtbb/buffer_length' within same profile!
I am looking for the command which DMO runs to find these issues.
Thx
Jonu
No idea. Basically, you can resolve those issues by removing the parameter from the respective profile (either instance or start). FYI, after the upgrade, the START and INSTANCE profiles will be merged.
Hi Nicholas,
Thanks for the extremely useful information above. We are looking to upgrade and migrate a NetWeaver BW 7.31 system (on-premise AIX/DB2) to SAP NetWeaver BW 7.4 on HANA (in the cloud). While looking at the DMO option, we found information saying that DMO does not support a migration/upgrade from an on-premise system to the cloud. Have you come across any such finding in your experience?
Thanks in advance,
Srikishan
Excellent blog Nicholas. 🙂
I always find your blogs and posts to be very informative and useful.
Dear Nicholas,
Please guide me for following,
1. What caution require while using DMO,
2. What updates/upgrade /SP require for Source system.
3.Is any video for the same (end to end) or any option for hands-on for DMO
Thanks
Amit Sharma
Hi Amit Sharma,
Do a search for "DMO" in SCN content and you'll find a lot of useful info, especially: http://scn.sap.com/docs/DOC-49580
Thanks!
Hi Amit,
You can refer to the link below :
Migration to SAP NetWeaver BW on SAP HANA using DMO [Video]
Regards,
Anudeep
hi,
we have a requirement to capture phase-wise details to develop a tool that reports progress on a percentage basis. Can anybody help with this?
We tried AL11 and the OS level, and we have the DMO log files, but nothing is working.
Thanks,
You can try SAP CCMS log monitoring (logmon) to read the log files. We did this on some of our upgrades to get an e-mail in case of any error or any change in phase completion. You need to identify the patterns in the log file.
Excellent blog Nicholas, I have a simple question:
After the upgrade & migration with DMO from Oracle to HANA of my development system, can I use both systems?
Because I want to use the non-upgraded development system until the end of the upgrade, and I will use the upgraded development system for dual maintenance.
Is it possible?
Thanks in advance
I believe you can, by installing a new SAP application server and pointing it to the Oracle DB. However, this is not ideal and might not be supported; the ideal way is to perform a system copy of the source system and perform the upgrade on the copied system.
Thanks, fortunately for me SUM SPS20 solved my problem.
I have a doubt; I just wrote to Boris, but this is kind of urgent for me:
Reading note 237738, I have a doubt about the step below:
——————< D033068 02/JUL/14 >————————————-
A new BR*Tools package is required for update or upgrade to NW 7.4 SP08 and higher
If you want to update or upgrade your SAP system to SAP NetWeaver 7.4 SP08 and higher, install the latest
BR*Tools package (7.40 patch level 10 or higher) beforehand.
The new BR*Tools 749 are supported only on AIX 7.1, but the operating system release of my source system is AIX 6.1.
So I can't install the new BR*Tools 749 (R3load, etc.) on my source system, which is a prerequisite to start the upgrade/migration!
Am I right, or is it a false problem?
Thanks in advance
What's your target DB? If your target DB is HANA and not Oracle, this shouldn't apply to your case. However, if BR*Tools 749 is not supported on your OS, you can use any BR*Tools version higher than 7.40 patch level 10.
Nice blog Nicholas Chang. I just want to understand whether we can use DMO with parallel export/import for any cloud (i.e. without the DMO with System Move option) using the memory pipe method.
Thank you,
Amit Lal
Hi Amit,
I believe that's the restriction: before DMO with System Move existed, cross-data-center/cloud migration with DMO was not supported by SAP. However, logically it should work if you have a very stable and fast VPN connection between source and target, and can ensure 0% network glitches. Just my 2 cents; you can test it at your own risk. 😉
Thanks,
Nicholas Chang
Hi Nicholas,
Nice blog 🙂 I have a query here. In picture B of your blog, I see that the migration is done before the upgrade. So if I have an old release system like ECC 6.0 EHP 3 or 4: in a classical migration I would first upgrade the system to EHP 7 and then migrate it to HANA, but if I run DMO on this system, how is it going to migrate first, given that ECC 6.0 EHP 3 or 4 isn't supported by HANA?
Thanks,
Rajdeep.
Hi Rajdeep,
The target release repository is created on HANA during uptime.
Thanks,
Nicholas Chang
Nicholas,
Hello. Some time has passed since this blog. With DMO 2.0, does SAP now support DMO with System Move from on-premise to cloud, assuming that the VPN and pipe from source to target are fast and stable?
Thanks.
Regards,
James
Hi James,
Hope this blog answers your question.
https://blogs.sap.com/2017/10/17/demystifying-sap-database-migration-tools-for-your-sap-hana-migration-and-cloud-adoption/
Thanks,
Nicholas Chang
Hello Nicholas - I am planning to migrate my CRM system on an Oracle DB to HANA DB with DMO only (no update/upgrade), but in the configuration phase it gives me the error below.
Any insight into this error?
Incomplete Configured BW System
The update process identified your system as a BW system with active process chains
or RDA demons. Therefore the 'Destination for Import Post-Processing' has to be maintained
with the following transactions:
Maintain/customize the destination for the import post-processing
in customizing transaction SPRO following the path:
SAP Netweaver -> Business Intelligent -> Transport Settings -> Create 'Destination for Import Post-Processing'
or call TA RSTPRFC (Create destination in BW standard client for import post-processing) directly.
Details:
BW client set: 011 011
Enter a RFC destination into client 011 in table RSADMINA-TPBWMANDTRFC 011