Technical Articles
DMO: optimizing system downtime is timeless…
Last Changed: Q4/2020
Increasing the DMO performance
Blogs:
– Optimizing DMO Performance
– DMO: Handling table comparison checksum errors
– Downtime minimization when upgrading BW systems
– Note 2350730 – Additional Info – SAP BW 7.50 SP04 Implementation
Since the availability of the DMO functionality within the SUM framework, a lot has happened under the hood as well. Besides the availability of the SAP First Guidance – Migration BW on HANA using the DMO option in SUM (which is updated constantly …)
the latest version of the SUM/DMO executables is available at https://support.sap.com/sltoolset
Oracle-based systems in particular, which cover the majority of our customer base, need some performance improvements right from the beginning to ensure a stable and performant migration to BW on HANA, regardless of the system size or other challenges along the way.
Performance Optimization: Table Migration Durations
You can provide the Software Update Manager with the information about table migration durations from a previous DMO run. SUM uses this data to optimize the performance of subsequent DMO runs on the same system. Although SUM does consider the table sizes for the migration sequence, other factors can influence the migration duration. In other words, more criteria than just the table size have to be considered for the duration of a table migration.
The real durations are the best criteria, but they are only known after the migration of the tables.
ℹ This can improve the downtime by up to 50%, due to the more “aggressive” table splitting after the first run based on the results in the XML files. During a migration, SUM creates text files with the extension .XML which contain the information about the migration duration for each migrated table. The files are created in the directory SUM/abap/htdoc/ and are called (create them beforehand if you want to use the setting from the beginning):
– MIGRATE_UT_DUR.XML for the uptime migration, and
– MIGRATE_DT_DUR.XML for the downtime migration
Copy the above-mentioned XML files directly to the download folder. Specifying the file location in the file SAPup_add.par in directory SUM/abap/bin has become obsolete with the current SUM versions 1.0/2.0 (September 2020).
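Reusing the files from a previous run only requires copying them into the download folder before starting SUM. A minimal shell sketch of that step (the directory layout and the helper name copy_durations are illustrative assumptions, not part of SUM):

```shell
# copy_durations: copy the duration XML files written by a previous SUM run
# (in SUM/abap/htdoc/) into the download folder of the next run.
# Both directory paths are passed in; adjust them to your installation.
copy_durations() {
    sum_dir="$1"
    download_dir="$2"
    for f in MIGRATE_UT_DUR.XML MIGRATE_DT_DUR.XML; do
        if [ -f "$sum_dir/abap/htdoc/$f" ]; then
            cp "$sum_dir/abap/htdoc/$f" "$download_dir/"
            echo "copied $f"
        else
            echo "skipped $f (not found - no previous run data)"
        fi
    done
}

# Example call (paths are assumptions):
# copy_durations /usr/sap/SID/SUM /usr/sap/SID/download
```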
➕ DMO: background on table split mechanism
Whatever you find out: you can trust the DMO optimization when it comes to table splitting; the algorithm behind it is smarter than you think. Overruling it via a manual setup of the file EUCLONEDEFS_ADD.LST is not possible.
The SAP First Guidance – Using the new DMO to migrate to BW on HANA also describes how to use the file SUM/abap/htdoc/UPGANA.XML to optimize the runtime further.
ℹ Don't forget the BW housekeeping tasks before you start the DMO procedure, and don't underestimate their importance for saving time and space! The blog SAP BW-Performance Tuning through System Tables gives you a good overview of the “waste” you have collected in your BW system. Together with the manual table splitting option, you can take over these tables without content.
SAP First Guidance – BW Housekeeping and BW-PCA
Introducing the DMO Benchmark tool
➕ DMO: introducing the benchmarking tool
start the tool with the following URL – https://<host>:1129/lmsl/migtool/<SID>/doc/sluigui
Parallel Processes Configuration
http://<host>:1128/lmsl/sumabap/<SID>/set/procpar
Note 1616401 – Parallelism in the Upgrades, EhPs and Support Packages implementations
❗ Do not use more than 32 R3trans processes and 24 SQL processes as a maximum for the start. The R3load processes can be increased or decreased as resources are available, but keep in mind that the configured value counts twice: e.g. 96 export and 96 import processes on the application server lead to 192 processes in total. If you want to increase the migration performance, only the downtime processes are relevant. As the ABAP DDIC has almost the same size regardless of the DB size, SQL PROCESSES and PARALLEL PHASES can stay at their default values.
❗ For the configuration phase you can use higher values for the uptime/downtime R3load processes to allow a higher split value for the tables. Before the downtime processing starts, switch to a much lower value for R3load to ensure that you always stay in an optimal range of CPU/RAM/network usage. Increasing the R3load count step by step (e.g. in increments of 20) is more effective than starting with a high value and reducing it. The little monitoring tool “nmon” (e.g. on AIX, Linux64, etc.) can be very useful for monitoring hardware and network resources during the migration process.
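As a sanity check for the doubling described above, a small arithmetic sketch (the helper name total_r3load is illustrative; only the rule that the configured value runs once for export and once for import comes from the text):

```shell
# total_r3load: the configured R3load value is started once for export and
# once for import, so the resulting OS process count is twice the setting.
total_r3load() {
    echo $(( $1 * 2 ))
}

total_r3load 96   # 96 export + 96 import -> prints 192
```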
– Phases behind DMO R3load parallel export/import during UPTIME and DOWNTIME to target HANA DB
– SUM: introduction to shadow system
ℹ
For the detailed monitoring of the HANA resources you can use the following SAP Notes and enable the HANA Configuration Check Tool
- Note 1943937 – Hardware Configuration Check Tool – Central Note
- Note 1991051 – Hardware Configuration Check Tool – Extended documentation for network test module
If you want to increase the value for the ABAP processes, you have to maintain the instance parameter rdisp/tm_max_no=2000 in the shadow instance as well as in the primary instance to prevent SAP Gateway errors like the following (this is also related to the value for PARALLEL PHASES):
*** ERROR => GwRqDpSendTo: DpLogon failed [gwdp.c 3580]
*** WARNING => DpLogon: overflow of logon table [dpxxtool2.c 4047]
*** LOG Q0P=> DpLogon, tm_overflow ( 200) [dpxxtool2.c 4050]
Create a graphical output after the DMO run
Once the files EUMIGRATEUTRUN.LOG and EUMIGRATEDTRUN.LOG are created, you can not only improve the next DMO run, but also use these files as input for a graphical representation of the DMO run. With the release of SUM 1.0 SP14, the extended UI allows real-time monitoring and detailed analysis of existing DMO runs. See also the document – SAP First Guidance – Using the new DMO to Migra… | SCN
Example for a DMO run explaining the different phases:
Oracle: Suppress Long-Running Phases
EU_CLONE_DT_SIZES/EU_CLONE_UT_SIZES
During the update with DMO, the following phases can be long-running:
– EU_CLONE_DT_SIZES
– EU_CLONE_UT_SIZES
In the course of these phases, the system updates the database statistics regarding the usage of space that the tables need on the database. The aim is a better distribution of the tables during the system cloning. Before you start the update, you have the option to suppress these long-running phases using the following procedure:
1. Log on to the host where the Oracle database instance is running. Use user ora<dbsid> on UNIX systems, or user <sapsid>adm on Windows.
2. Open a command prompt and execute the following commands:
brconnect -u / -c -f stats -t oradict_stats -p 8
brconnect -u / -c -f stats -t system_stats -p 8
brconnect -u / -c -f stats -o <schema_owner> -t all -f allsel,collect,space -p 8 -c 5 -force
brconnect -u / -c -f stats -t all -f monit -p 8
3. Add to file SAPup_add.par the following line:
/ORA/update_spacestat = 0
The file SAPup_add.par is located in the subdirectory SUM/abap/bin of the SUM-directory. If this file does not exist yet, create it.
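The file handling in step 3 can be scripted defensively, creating SAPup_add.par if it does not exist yet and avoiding a duplicate entry on repeated runs. A sketch (the helper name add_spacestat_param and the path handling are assumptions):

```shell
# add_spacestat_param: append "/ORA/update_spacestat = 0" to SAPup_add.par
# in <SUM dir>/abap/bin, creating the file if it does not exist yet and
# skipping the append if the parameter is already present.
add_spacestat_param() {
    par="$1/abap/bin/SAPup_add.par"
    mkdir -p "$(dirname "$par")"
    touch "$par"
    grep -q '^/ORA/update_spacestat' "$par" || \
        echo '/ORA/update_spacestat = 0' >> "$par"
}

# Example call (path is an assumption):
# add_spacestat_param /usr/sap/SID/SUM
```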
❗ Oracle-based systems in particular (still the largest share of SAP customer installations) need special attention for the DB statistics, regardless of which version you are running. “An old statistic is a dead statistic” – and old can mean 10 seconds, or an empty table as well. You can always see in transaction SM50 which table is busy, and run an updated statistic with transaction DB20. This can already help a lot, but of course it can be time-consuming, so have a look at the following SAP Notes as well. Oracle is the RDBMS that needs the most attention before you start the DMO process.
❗ Manually created additional indexes on the source database can also lead to errors in the HANA import, as described in SAP Note 2007272. There is an update for the current 7.42 R3load available – see SAP Note 2144285.
ℹ
Don't go by the SAP Note titles alone, and don't mix them up with recommendations for a manual heterogeneous system copy. DMO is highly optimized in a way that a custom-built migration script or monitor would never reach, and such scripts are not supported in this context anyway.
Note 936441 – Oracle settings for R3load based system copy
init.ora/spfile
filesystemio_options = setall
disk_asynch_io = true
log_buffer = 1048576
parallel_execution_message_size = 16384
parallel_threads_per_cpu = 1
parallel_max_servers = number of CPU’s * number of concurrent R3load processes * 2
processes = processes + parallel_max_servers
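To make the two sizing formulas concrete, a worked example with illustrative numbers (8 CPUs, 12 concurrent R3load processes, a base processes value of 300 — all assumptions, not recommendations):

```shell
# parallel_max_servers = number of CPUs * concurrent R3load processes * 2
cpus=8
r3load=12
parallel_max_servers=$(( cpus * r3load * 2 ))
echo "parallel_max_servers = $parallel_max_servers"   # 8 * 12 * 2 = 192

# processes = (previous processes value) + parallel_max_servers
base_processes=300
processes=$(( base_processes + parallel_max_servers ))
echo "processes = $processes"                         # 300 + 192 = 492
```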
SQL*Net configuration
Increase SDU_SIZE in listener.ora to 64KB
Increase SDU_SIZE in tnsnames.ora to 64KB
Specify tcp.nodelay = yes in sqlnet.ora
Note 1045847 – ORACLE DIRECT PATH LOAD SUPPORT IN R3LOAD
Note 1741015 – Repartitioning of BW tables
Note 1413928 – Index corruption/wrong results after rebuild index ONLINE
Note 1635605 – CLIENT HANGS ON INSERT INTO TABLE WITH SECUREFILE LOB
Note 1813548 – Database Migration Option (DMO) of SUM
Note 1875778 – Performance Optimization for System Copy Procedures
Note 1918774 – Performance issues when running a SAP Installation / System Copy
Note 1967819 – R3load tools on HANA DDLHDB.TPL: create primary key after load
Note 1981281 – Oracle: Add. Information – Software Update Manager 1.0 SP12
Note 2008547 – Error during conversion of sub-partitioned tables
Note 2007272 – Identify Duplicated Records in Oracle tables
Note 2130908 – SUM tool error to connect HANA DB with target release 7.4 SP8 or higher
➕ For most systems, the following example of the file SUM/abap/bin/SAPup_add.par would increase the performance a lot.
*/clonepar/imp/procenv = HDB_MASSIMPORT=YES
*/clonepar/indexcreation = after_load
*/clonepar/clonedurations = <absolute_path>/MIGRATE_UT_DUR.LST,<absolute_path>/MIGRATE_DT_DUR.LST
/ORA/update_spacestat = 0
* retired values, now defaults with the current R3load versions (Note 2118195) and the latest SUM versions (see below).
SAP Kernel handling – always use the latest version of the R3* tools and LibDBSL
During the migration to SAP HANA, DMO of SUM has to deal with three kernel versions:
• Kernel currently used by the system for the source DB (e.g. 7.20 EXT for Oracle)
• Kernel for the target release and source DB (e.g. 740 for Oracle – used for shadow system)
• Kernel for the target release and SAP HANA
The kernel currently used by the system can usually be found in /sapmnt/<SID>/exe/…
The other two target kernel versions (for AnyDB and SAP HANA) can be found in the SUM directory.
At the beginning of the migration process those directories look like this:
• SUM/abap/exe < contains the target kernel for AnyDB
• SUM/abap/exe_2nd < contains the target kernel for SAP HANA
During downtime (phase MIG2NDDB_SWITCH) the directories will be switched. After the switch it looks like this:
• SUM/abap/exe (target kernel for AnyDB) has been moved to SUM/abap/exe_1st
• SUM/abap/exe_2nd (target kernel for SAP HANA) has been moved to SUM/abap/exe
As usual in SUM, later on (in phase KX_SWITCH) the kernel is copied from SUM/abap/exe to the system.
❗ Together with the R3* tools, always exchange the LibDBSL for the source and the target DB as well. Currently, for kernel 7.42 (which is needed for SAP BW 7.40 SP08 and higher), these latest patches are needed:
Note 2054965 – R3load: TOC for logical table is incomplete in declustering mode after restart
Note 2124912 – R3load sporadically produces empty task files
Note 2118195 – R3load aborts during unicode conversion and declustering
Note 2144285 – R3load can produce duplicates at export if bulk fetch mode is active
Note 2130541 – SAP HANA: Deactivate fast data access in a non-Unicode system
Note 2144274 – R3load migration to SAP HANA database Revision 85.02 can lead to a disk full event
SAP tools (06.03.18) | DBA tools | LibDBSL | R3szchk | R3ldctl | R3load
Oracle (ORA) | 740O11.033 | See SAP Kernel PL below
IBM/UDB (DB6) | db6util_510 | See SAP Kernel PL below
HANA (HDB) | SP12.24 | See SAP Kernel PL below
tp/R3trans 7.49 | PL628 (01.03.19)
tp/R3trans 7.53 | PL400 (01.03.19)
SPAM/SAINT 7.xy/0071 | (21.09.18)
SAPHostAgent 7.21 | SP41 (18.02.19)
SUM 1.0 SP21 | PL07 (14.02.18) | SAP Note 2418924
SUM 1.0 SP22 | PL14 (25.02.19) | SAP Note 2472928
SUM 1.0 SP23 | PL03 (27.02.19) | SAP Note 2580442
SUM 2.0 SP01 | PL09 (01.08.18) | SAP Note 2472850
SUM 2.0 SP02 * | PL12 (21.02.19) | SAP Note 2529257
SUM 2.0 SP03 ** | PL10 (29.01.19) | SAP Note 2580453
SUM 2.0 SP04 *** | PL04 (27.02.19) | SAP Note 2644862
Attention: The Software Update Manager 2.0 SP01 is part of the Software Logistics Toolset 1.0 SP stack 22 and exists in parallel to the Software Update Manager 1.0 SP21.
* Attention: The Software Update Manager 2.0 SP02 is part of the Software Logistics Toolset 1.0 SP stack 23 and exists in parallel to the Software Update Manager 1.0 SP22.
** Attention: The Software Update Manager 2.0 SP03 is part of the Software Logistics Toolset 1.0 SP stack 24 and exists in parallel to the Software Update Manager 1.0 SP23.
*** Attention: The Software Update Manager 2.0 SP04 is part of the Software Logistics Toolset 1.0 SP stack 25 and exists in parallel to the Software Update Manager 1.0 SP24.
Which SUM/DMO Version for which scenario?
(Blog/Graphic by Boris Rubarth, SL tools)
Note 2223738 – Central Note – Software Update Manager 1.0 SP17 [lmt_008]
Note 2277058 – Central Note – Software Update Manager 1.0 SP18 [lmt_004]
Note 2328500 – Central Note – Software Update Manager 1.0 SP19 [lmt_005]
Note 2418924 – Central Note – Software Update Manager 1.0 SP21 [lmt_007]
Note 2472928 – Central Note – Software Update Manager 1.0 SP22 [lmt_004]
Note 2428168 – Central Note – Software Update Manager 2.0 SP00 [lmt_020]
Note 2472850 – Central Note – Software Update Manager 2.0 SP01 [lmt_021]
Note 2529257 – Central Note – Software Update Manager 2.0 SP02 [lmt_022]
One Word about the HANA Version
To keep the additional complexity to a minimum, try to stick with the current HANA 1.0 SP12 revisions when running the DMO procedure. For a list of the available SAP HANA revisions, have a look at the
SAP First Guidance – SAP BW on HANA – Edition 2017
Nevertheless, some of the HANA settings mentioned here are beneficial regardless of the SAP HANA version used.
MERGE DELTA of <tablename> FORCE REBUILD;
Note 2112732 – Pool/RowEngine/MonitorView allocates large amount of memory
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'system') set ('expensive_statement', 'use_in_memory_tracing') = 'false';
Note 1912332 – SQL array DML operations fail with error code “-10709”
ALTER SYSTEM ALTER CONFIGURATION('indexserver.ini','SYSTEM') SET ('distribution', 'split_batch_commands') = 'off' WITH RECONFIGURE;
Note 2136595 – Dump DBIF_DSQL2_SQL_ERROR when doing a BW data load
dbs/hdb/connect_property = CONNECTTIMEOUT=0
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'system') set ('optimize_compression', 'min_hours_since_last_merge_of_part') = '0'
Note 2039810 – Table does not get merged automatically
Note 1930853 – HdbUserStore contains too many SAP HANA nodes
Note 2229248 – Long runtime in background job “RSUPGRCHECK” during SAP EHP upgrade
Note 2026343 – SP34: DMO/Housekeeping: Performance improvement of PSA Housekeeping tasks
ℹ With the release of SUM 1.0 SP14 and higher, a new, functionally improved and consolidated UI5 is available for all SUM/DMO procedures (except dual stacks).
See also the Blog – Upgrade was never been easier …
Roland Kramer, SAP Platform Architect for Intelligent Data & Analytics
@RolandKramer
“I have no special talent, I am only passionately curious.”
Roland,
Thank you for posting. This will be a great reference as new DMO features are released. Looking forward to updates in the future.
One question I do have regarding future DMO performance enhancements: When will ROWID table splitting be available in SUM/DMO with Oracle as the source RDBMS?
Hi,
I will add this additional possibility to the Blog as well now.
This is not yet added to the documentation, as you overrule the DMO procedure and have to treat this with caution.
Best Regards Roland
Roland,
I received official word from DMO development that Oracle ROWID table splitting is native to SUM w/DMO 1.0 SP13.
We are about to start a new SUM w/DMO and will reply with results and lessons learned regarding ROWID table splitting.
Regards,
Scott
Thanks Roland,
For all the optimization parameters to be considered for a DMO execution: we are in the middle of a BW on HANA production migration and have explored all the options discussed above.
Thanks for the insight and for summarizing!
Alert!!
As of HANA Rev85 you need to use the parameter /clonepar/indexcreation = before_load to avoid a crash of the indexserver, as indexing of big tables will then be done before the load. However, it leads to no creation of the primary and secondary indexes, due to the bug in SUM PL04 and PL10.
Hi Devpriy Trivedi,
based on Note 1813548 - Database Migration Option, it mentions that this only happens with tables of more than 2 billion rows.
----------------< Update D028310 07/JAN/2015 >-----------------------
---------------------< D028310 05/AUG/2014 >-------------------------
HANA: ATTENTION: Potential out-of-memory error on the HANA indexserver
(Update 07.01.2015: This issue has been fixed with SAP HANA Revision 90. Before you start the update or upgrade with DMO, make sure that you have installed SAP HANA with revision 90 or higher.)
Symptom and Cause:
During the migration to the SAP HANA database, one or more tables with a table size of over 2 billion rows are part of the migration. If a table does not yet have a primary key, this key has to be created during the migration. However, the primary key creation for large tables with more than 2 billion rows will be processed by the SAP HANA join engine instead of the OLAP engine of SAP HANA. This is due to a limitation in the OLAP engine.
But the join engine requires more memory if the primary key is created on an already filled table, because it has to hold all partitions in memory on one single node at a certain point of time. This may cause an out-of-memory-error in the SAP HANA indexserver. Since the default setting for HANA migration in DMO is to create the primary key after the load in order to speed up loading performance, this constellation is very likely to occur with large tables of the above mentioned size.
Solution:
a) If you can predict before you start the update or upgrade that this error might occur, you set in the file SAPup_add.par the parameter
/clonepar/indexcreation = before_load
b) If you encounter this error during the migration, you proceed as follows:
1. Set in the file SAPup_add.par the parameter "/clonepar/indexcreation = before_load"
2. Reset the current EUMIGRATE phase by renaming the current <SUM_DIR>/migrate_<name> directory
3. Repeat the phase with the ‘init’ option
Yeah Nicholas - but in our case we found that none of the tables have indexes.
Hi Devpriy Trivedi
May I know how you found out that the primary keys were not created? Did you resolve it without rerunning DMO?
Thanks,
Nicholas Chang
Hello Nicholas -
Mainly from the migration_dt log files, but we also randomly checked tables I was aware of via SE16, and I referred to DB02 for missing indexes.
We reran the migration without the before_load parameter, and to avoid the indexserver crash we reduced the size of the big tables we had learnt about from our previous run.
Thanks
Dev
Hi
Thanks for the input. Just to clear up my confusion: how do you know which indexes should be created and which shouldn't?
As per the note below, it stated we can ignore those missing indexes.
2058283 - DDIC/DB consistency check: Unknown objects in ABAP/4 Dictionary
Do you still see missing indexes in dbacockpit for your second successful run?
Thanks,
Nicholas Chang
I am not sure, but I think all the transparent tables should have the indexes – after a successful migration there were no missing indexes reported in DBACOCKPIT.
Thanks
Dev
Hi Nicholas,
I'm afraid your section about the parameter /clonepar/indexcreation = before_load
cannot be found in Note 1813548 - Database Migration Option (DMO) of SUM.
The current HANA version for SP08 is Rev. 85.02.
Following the recommendations from the SAP Notes, /clonepar/indexcreation = after_load
makes more sense.
Note 1967819 - R3load tools on HANA DDLHDB.TPL: create primary key after load
Note 2130908 - SUM tool error to connect HANA DB with target release 7.4 SP8 or higher
Splitting the table beforehand, as stated in SAP Note 1741015 - Repartitioning of BW tables, is the more useful approach here.
Best Regards Roland
Hi Roland,
SAP has changed the solution from setting "/clonepar/indexcreation = before_load" to "Contact the SAP support", as the note was updated on 19/03/2015.
----------------< Update D023536 19/MAR/2015 >-----------------------
---------------------< D028310 05/AUG/2014 >-------------------------
HANA: ATTENTION: Potential out-of-memory error on the HANA indexserver
(Update 07.01.2015: This issue has been fixed with SAP HANA Revision 90. Before you start the update or upgrade with DMO, make sure that you have installed SAP HANA with revision 90 or higher.)
Symptom and Cause:
During the migration to the SAP HANA database, one or more tables with a table size of over 2 billion rows are part of the migration. If a table does not yet have a primary key, this key has to be created during the migration. However, the primary key creation for large tables with more than 2 billion rows will be processed by the SAP HANA join engine instead of the OLAP engine of SAP HANA. This is due to a limitation in the OLAP engine.
But the join engine requires more memory if the primary key is created on an already filled table, because it has to hold all partitions in memory on one single node at a certain point of time. This may cause an out-of-memory-error in the SAP HANA indexserver. Since the default setting for HANA migration in DMO is to create the primary key after the load in order to speed up loading performance, this constellation is very likely to occur with large tables of the above mentioned size.
Solution:
Contact the SAP support.
This is good
Hello Roland, what is the process, and which preparatory steps are needed, to migrate a central system (Windows+Oracle) to HANA with DMO, with HANA and ABAP on one server?
Great work Roland. My one stop place for DMO.
A question though: I keep hearing conflicting information on whether you have to be OS/DB (TADM70) certified or not to perform a DMO. Are you able to give us an official answer?
Cheers,
Amerjit
Hi,
See the statement from SAP Note 1813548 - Database Migration Option (DMO) of SUM 1.0 up to SP13:
>Organizational remark
We recommend to involve a certified consultant "OS/DB migration for SAP NetWeaver", but this is not required.<
Personally, I think it is more important to know the right things than to hide behind a certification which is a decade old. SAP HANA is not that old.
Best Regards Roland
Hi Roland,
As you mentioned: * From SUM/DMO 1.0 SP13 the extension of the files changed from .LST to .XML. But from the upgrade guide (for SUM 1.0 SP13), I can see we still need to use MIGRATE_DT_DUR.LST. Can you help? 🙂
BR,
Johnny
Hi
the official guide has been changed in the meantime. Please check for the latest version.
best regards Roland
Hi Roland,
Indeed~~
Thanks
John
Hi Roland,
One more question: in case the MIGRATE_DT_DUR.LST/XML files were generated by SUM SP12 in the last PoC, and in this PoC we used SUM SP13, I think we should again use the .LST file instead of the .XML file, right?
Cheers,
Johnny
Hi Roland,
Also, the latest DMO guide (SUM_SP13_DMO_RTC_15) mentions to only reuse error-free MIGRATE_UT_DUR.XML and MIGRATE_DT_DUR.XML files. My questions:
i) Can we still reuse the above XML files if, for example, we hit an activation error during the pre-processing/downtime phase in the first run?
ii) Is it necessary to set the parameter below? It is not specified in the guide, which asks to put the files in the "download" directory.
/clonepar/clonedurations=<absolute_path>/MIGRATE_UT_DUR.XML, <absolute_path>/MIGRATE_DT_DUR.XML
Hope to hear from you soon.
Thanks!
Nicholas
i) It is not an error log, and it is completed when the tables are processed.
ii) What is wrong with specifying the absolute path for the XML files, wherever they are accessible?
best regards Roland
Hi @RolandKramer,
Thanks for the reply.
Back to my first question: can we still reuse the MIGRATE_UT_DUR.XML and MIGRATE_DT_DUR.XML for the next DMO run if we encountered an error (e.g. an activation error) during the first DMO run? I ask because the latest DMO guide (SUM_SP13_DMO_RTC_15) mentions to only reuse error-free MIGRATE_UT_DUR.XML and MIGRATE_DT_DUR.XML files.
Apologize for the confusion earlier.
Thanks,
Nicholas
Hi,
I updated the Blog above for clarification.
Best Regards Roland
Hi Roland,
Blog mentions that "SUM uses this data to optimize the performance of subsequent DMO runs on the same system. "
By “same system”, do we mean the same <SID> which was upgraded earlier, or a similar system? For SUM SP13 it can be used for a similar system, but the customer is using SP12, and I could not find a clear statement for SUM SP12.
Thanks.
BR,
Gaurav
Hi,
It is related within the SUM SP.
Of course the XML files can be used for other systems within the same system landscape as well.
Best Regards Roland
Hi,
Thanks for posting. This will be a great reference.
warm regards sandeep
Hi Roland, and thank you very much for your blog. Regarding database compression: do you have experience with how beneficial or harmful the DMO downtime process could be with or without database compression? Thanks
Regards, Omar
Hi,
from the DMO process side, nothing will be harmed if the data is not compressed. You will see a much better compression rate on SAP HANA when the source data is uncompressed.
Best Regards Roland
Hallo Roland
Would Linux OS tuning help DMO optimization – things like TCP buffers, swap, I/O scheduler, storage block devices, swappiness? What about changing from the default page size (4 KB) to HUGE PAGES? What do you suggest regarding the Ethernet card?
regards,
biral
Hallo Roland,
Hello Stephen,
We tried using the UPGANA.xml file from previous DMO runs for an ECC EHP6 upgrade and migration to EHP7 on HANA. The SUM version we used was SUM 1.0 SP17 PL8.
We provided UPGANA.xml in the download directory and entered the path in the SAPup_add.par file in /usr/sap/<SID>/SUM/abap/bin/; however, it was not accepted by the SUM tool.
We received the following error in phase MAIN_SHDIMP/SUBMOD_MIG_PREPARE/EU_CLONE_MIG_UT_PRP:
Illegal top level tag analysis in ‘/usr/sap/<SID>/Download_DIR/UPGANA.xml’ – expected ‘Clone durations’ .
Can you please help on this?
Thanks,
Ambarish
Hi,
As stated in the Document - SAP First Guidance – Using the new DMO to migrate to BW on HANA
the UPGANA.xml files have to be copied to the download folder instead
Best Regards Roland
Hi,
The files were copied to the download folder only; however, we received the same error "Illegal top level tag analysis in ‘/usr/sap/<SID>/Download_DIR/UPGANA.xml’ – expected ‘Clone durations’." for each run. We tried reusing UPGANA.xml from 3-4 mock runs, and every time we got the same error.
Also note that we reused these files for same system with same SID. MIGRATE_UT_DUR.XML, MIGRATE_DT_DUR.XML were taken successfully by SUM tool.
Problem as mentioned above occurred for "UPGANA.XML" file only.
Could a difference in DB size or table growth be the reason behind this?
Thanks,
Ambarish
Hello Ambarish,
As I can see, you are not using the standard notation for the download directory:
/usr/sap/<SID>/download
SAP First Guidance – Migration BW on HANA using the DMO option in SUM
Do you have the correct access rights to the DIR_PUT directory?
Note 2383097 - SUM: Phase CREATE_UPGEVAL failed with UPDATE_ERROR (UPDATE_ERROR): EXCEPTION UPDATE_ERROR RAISED
Best Regards Roland
Hello Roland,
Can the table duration files generated during the benchmarking run be used in the productive run for optimization?
Regards,
Sree
Hello
Of course. This is exactly what it was made for – see Chapter 4.3.4:
https://service.sap.com/~sapidb/011000358700000950402013E
Best Regards Roland
Hallo Roland,
We recently did a migration of an SAP EWM system from Windows/SQL to Windows/HANA using SUM DMO 1.0 SP20.
There were no errors during downtime, and it took 8 hours to finish the downtime phase.
However, the time mentioned in UPGANA.xml doesn't match the actual time: UPGANA shows only 2 hours for the downtime phase.
EUMIGRATEDTRUN.LOG does show the correct time values.
Thanks,
Ambarish
Dear Roland,
I'd like to use the duration files of the benchmark run for the productive DMO run, as I only have a two-system landscape. The benchmark run created the file MIGRATE_DUR.XML. So do I have to rename it to MIGRATE_DUR_DT.XML, or does DMO understand it when I set the parameter like
/clonepar/clonedurations = <absolute_path>/MIGRATE_DUR.XML ??
Hi Roland,
In which logfile can I check that the duration files are actually being used?
Thanks,
Sander
Hello Sander
Believe it ...
See also - https://help.sap.com/doc/38301960cfe4484587f9cedb8c6a740f/dmosum10.22/en-US/dmo_of_sum1_to_hana.pdf => Chapter 2.8
and - SAP First Guidance – Using the new DMO to migrate to BW on HANA => Page 20/21
Best Regards Roland
HI Roland,
I now put the DUR files in ../<the.downloaddir>/*DUR.XML
And to be sure, I also put them in the htdoc location and updated SAPup_add.par with the htdoc location.
In the logfile ASKDOWNLOAD.LOG both files are now referenced, so this gives me confidence that they will be used.
1 ETQ399 Determine actions for scanned files.
2 ETQ399 Adding action 'USE_CLONE_DURATIONS' to '\install\Download\MIGRATE_DT_DUR.XML'.
2 ETQ399 Adding action 'USE_CLONE_DURATIONS' to '\install\Download\MIGRATE_UT_DUR.XML'.
Gr
Sander
It is no longer necessary to add an entry into SAPup_add.par, can you update the blog to reflect this ?
Hi
Actually, all parameters I mentioned are retired and marked with an asterisk ...
Best Regards Roland
Hi Roland,
This was very helpful to me, so thanks a lot for listing all these different topics.
One slight remark though: I see a lot of references (in other DMO posts as well) to Note 936441 - Oracle settings for R3load based system copy, but that note dates back to 2012.
Wouldn't it be better to either have that note updated or refer to a different note.
For example the SDU settings: as of Oracle 11.2.0.2 this can be set to a value of 65535.
It also refers to R3ldctl 7.00, Patch Level 1 - this seems quite outdated as well.
Regards,
Dieter
Hello Dieter,
Thanks for letting me know ...
In fact, I really never looked into Oracle since HANA is almost everywhere.
Even Oracle 13 has some improvements when it comes to the DMO procedure ...
Best Regards Roland
No problem. Thanks for updating!
I managed to get the runtime down from 4h45m to 4h10m. I believe I can get it down even more, but I'm struggling to find ways to tune Oracle or HANA further.
I've increased the plan_cache_size to 6 GB to avoid cache evictions and lower the CPU usage on the HANA level.
On Oracle, I believe the system benefits from a big buffer cache and PGA limits, but I'm running out of ideas.
Do you have any other tricks you might think of?
Regards,
D.
Hello Dieter
Last year I did a system review of BW 7.50 on Oracle 12 connected to a BWA 7.20. Actually this is a really weird configuration; however, it works if all the major settings in Oracle and BW are done.
Performance Review
See if the findings suit your situation.
best regards Roland
Thanks a lot!
I'll give it a look
D.
Hi Roland,
thank you for the blog and the tips!
Hint for the readers: for using the duration files, it is meanwhile sufficient to put the files into the download folder, no need to use that parameter file (pointing to the file location) that was formerly listed in the guide but is deprecated.
Kind regards,
Boris