
Increasing the DMO performance

Blogs:

– Optimizing DMO Performance
– DMO: Handling table comparison checksum errors
– Downtime minimization when upgrading BW systems
– Note 2350730 – Additional Info – SAP BW 7.50 SP04 Implementation

Since the DMO functionality became available within the SUM framework, a lot has happened under the hood as well, besides the availability of the SAP First Guidance – Migration BW on HANA using the DMO option in SUM (which is updated constantly …)


The latest Version of the SUM/DMO executables – https://support.sap.com/sltoolset

Oracle-based systems in particular, which cover the majority of our customer base, need some performance improvements right from the beginning to ensure a stable and performant migration to BW on HANA, regardless of system size or other challenges along the way.


Performance Optimization: Table Migration Durations

You can provide the Software Update Manager with the information about table migration durations from a previous DMO run. SUM uses this data to optimize the performance of subsequent DMO runs on the same system. Although SUM does consider the table sizes for the migration sequence, other factors can influence the migration duration. In other words, more criteria than just the table size have to be considered for the duration of a table migration.

The real durations are the best criteria, but they are only known after the migration of the tables.

This can reduce the downtime by up to 50%, due to the more “aggressive” table splitting after the first run based on the results in the XML files. During a migration, SUM creates text files with the extension .XML which contain the migration duration for each migrated table. The files are created in directory SUM/abap/htdoc/ and are called (create them beforehand if you want to use the setting from the beginning):

– MIGRATE_UT_DUR.XML for the uptime migration, and
– MIGRATE_DT_DUR.XML for the downtime migration

To provide SUM with these migration durations to optimize the next DMO run, proceed as follows:

1. Copy the above-mentioned XML-files to a different directory (such as the download folder) to prevent them from being overwritten.
2. Create the file SAPup_add.par in directory SUM/abap/bin and add the following parameter to this file:
/clonepar/clonedurations=<absolute_path>/MIGRATE_UT_DUR.XML, <absolute_path>/MIGRATE_DT_DUR.XML
(<absolute_path> is the placeholder for the directory to which you copied the XML files in step 1.)
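As a concrete sketch, the two steps could look like this on the SUM host. The paths below are illustrative defaults for this example only; in a real run, substitute your actual SUM directory and download folder:

```shell
# Sketch only: SUMDIR/DEST are illustrative defaults, not real landscape paths.
SUMDIR="${SUMDIR:-/tmp/demo/SUM}"      # usually something like /usr/sap/SUM
DEST="${DEST:-/tmp/demo/download}"     # any directory SUM will not overwrite

mkdir -p "$DEST" "$SUMDIR/abap/bin" "$SUMDIR/abap/htdoc"

# Step 1: save the duration files from the previous run
for f in MIGRATE_UT_DUR.XML MIGRATE_DT_DUR.XML; do
  if [ -f "$SUMDIR/abap/htdoc/$f" ]; then
    cp "$SUMDIR/abap/htdoc/$f" "$DEST/"
  fi
done

# Step 2: point the next run at the saved copies via SAPup_add.par
echo "/clonepar/clonedurations = $DEST/MIGRATE_UT_DUR.XML,$DEST/MIGRATE_DT_DUR.XML" \
  >> "$SUMDIR/abap/bin/SAPup_add.par"
```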

➕ DMO: background on table split mechanism

Whatever you find out: you can trust the DMO optimization when it comes to table splitting; the algorithm behind it is smarter than you think. Overruling it via a manual setup of the file EUCLONEDEFS_ADD.LST is technically possible on request, but not feasible due to the manual overhead. The SAP First Guidance – Migration BW on HANA using the DMO option in SUM also describes how to use the file SUM/abap/htdoc/UPGANA.XML to optimize the runtime further.

ℹ Don´t forget the BW housekeeping tasks before you start the DMO procedure, and don´t underestimate their importance for saving time and space! The blog SAP BW-Performance Tuning through System Tables gives you a good overview of the “waste” you have collected in your BW system. Together with the manual table splitting option, you can take over such tables without content.
SAP First Guidance – BW Housekeeping and BW-PCA


Introducing the DMO Benchmark tool

➕ DMO: introducing the benchmarking tool

Start the tool with the following URL – https://<host>:1129/lmsl/migtool/<SID>/doc/sluigui

Parallel Processes Configuration

http://<host>:1128/lmsl/sumabap/<SID>/set/procpar

[Screenshot: DMO parallel process configuration – DMO_processes.JPG]

Note 1616401 – Parallelism in the Upgrades, EhPs and Support Packages implementations

Do not use more than 32 R3trans processes and 24 SQL processes as a maximum for the start. The R3load processes can be increased/decreased as resources allow. Consider their values divided by two: e.g. 96 export and 96 import processes on the application server result in 192 R3load processes in total. If you want to increase the migration performance, only the downtime processes are relevant. As the ABAP DDIC is almost the same size regardless of the DB size, SQL PROCESSES and PARALLEL PHASES can even stay at their default values.

❗ For the configuration phase you can use higher values for the uptime/downtime R3load processes to allow a higher split value of the tables. Before the downtime phase starts, switch to a much lower value for R3load to ensure that you always stay in an optimal range of CPU/RAM/network usage. Increasing the R3load processes step by step (e.g. in increments of 20) is more effective than starting with a high value and reducing it. The little monitoring tool “nmon” (available e.g. for AIX and Linux64) can be very useful for monitoring hardware and network resources during the migration process.

– Phases behind DMO R3load parallel export/import during UPTIME and DOWNTIME to target HANA DB
– SUM: introduction to shadow system


For detailed monitoring of the HANA resources, you can use the following SAP Notes and enable the HANA Configuration Check Tool.

If you want to increase the value for the ABAP processes, you have to maintain the instance parameter
rdisp/tm_max_no = 2000 in the shadow instance as well as in the primary instance, to prevent SAP Gateway errors like the following.
This also relates to the value for the PARALLEL PHASES.

 *** ERROR => GwRqDpSendTo: DpLogon failed [gwdp.c       3580]
 *** WARNING => DpLogon: overflow of logon table [dpxxtool2.c  4047]
 *** LOG Q0P=> DpLogon, tm_overflow ( 200) [dpxxtool2.c  4050]
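As a sketch, the parameter could be appended to both instance profiles like this. The profile paths below are purely illustrative placeholders; locate the actual primary and shadow instance profiles in your system and SUM directory:

```shell
# Sketch only: both profile paths are illustrative, not real SAP paths.
PRIMARY_PROFILE="${PRIMARY_PROFILE:-/tmp/demo/profile/primary_instance.pfl}"
SHADOW_PROFILE="${SHADOW_PROFILE:-/tmp/demo/profile/shadow_instance.pfl}"

for p in "$PRIMARY_PROFILE" "$SHADOW_PROFILE"; do
  mkdir -p "$(dirname "$p")"
  # Raise the logon table size to avoid the DpLogon/tm_overflow errors above
  echo "rdisp/tm_max_no = 2000" >> "$p"
done
```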

Create a graphical output after the DMO run

Once the files EUMIGRATEUTRUN.LOG and EUMIGRATEDTRUN.LOG are created, you can not only improve the next DMO run, you can also use these files as input for a graphical representation of the DMO run. With the release of SUM 1.0 SP14, the extended UI allows real-time monitoring and detailed analysis of existing DMO runs. See also the document – SAP First Guidance – Using the new DMO to Migra… | SCN

Example for a DMO run explaining the different phases:

[Screenshot: example DMO run graph – DMO_Graph_SP13.JPG]


Oracle: Suppress Long-Running Phases

EU_CLONE_DT_SIZES/EU_CLONE_UT_SIZES

During the update with DMO, the following phases can be long-running:
EU_CLONE_DT_SIZES
EU_CLONE_UT_SIZES

In the course of these phases, the system updates the database statistics regarding the usage of space that the tables need on the database. The aim is a better distribution of the tables during the system cloning. Before you start the update, you have the option to suppress these long-running phases using the following procedure:

1. Log on to the host where the Oracle database instance is running, as user ora<dbsid> on UNIX systems or as user <sapsid>adm on Windows.

2. Open a command prompt and execute the following commands:

brconnect -u / -c -f stats -t oradict_stats -p 8
brconnect -u / -c -f stats -t system_stats -p 8
brconnect -u / -c -f stats -o <schema_owner> -t all -f allsel,collect,space -p 8
brconnect -u / -c -f stats -t all -f monit -p 8

3. Add to file SAPup_add.par the following line: /ORA/update_spacestat = 0
The file SAPup_add.par is located in the subdirectory SUM/abap/bin of the SUM-directory. If this file does not exist yet, create it.
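Step 3 can be scripted like this (the SUM path is an illustrative placeholder for this sketch; since SAPup_add.par may not exist yet, a plain append is enough):

```shell
# Sketch only: SUMDIR is an illustrative default, not the real landscape path.
SUMDIR="${SUMDIR:-/tmp/demo/SUM}"
mkdir -p "$SUMDIR/abap/bin"

# Suppress the space-statistics update in the EU_CLONE_*_SIZES phases
echo "/ORA/update_spacestat = 0" >> "$SUMDIR/abap/bin/SAPup_add.par"
```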

❗ Oracle-based systems in particular (still the largest part of the SAP customer base) need special attention for the DB statistics, regardless of the version you are running. “An old statistic is a dead statistic” – and old can mean 10 seconds, or an empty table. You can always see in transaction SM50 which table is busy, and run an updated statistic with transaction DB20. This can already help a lot, but can of course be time-consuming, so have a look at the following SAP Notes as well. Oracle is the RDBMS that needs the most attention before you start the DMO process.

❗ Manually created additional indexes on the source database can also lead to errors in the HANA import, as described in SAP Note 2007272. An update for the current 7.42 R3load is available – see SAP Note 2144285.

Don´t go by the SAP Note titles alone, and don´t mix this up with recommendations for a manual heterogeneous system copy. DMO is highly optimized in a way that a custom-built migration script or monitor would never reach; such an approach is not supported in this context anyway.

Note 936441 – Oracle settings for R3load based system copy

init.ora/spfile

filesystemio_options            = setall
disk_asynch_io                  = true
log_buffer                      = 1048576
parallel_execution_message_size = 16384
parallel_threads_per_cpu        = 1
parallel_max_servers            = <number of CPUs> * <number of concurrent R3load processes> * 2
processes                       = processes + parallel_max_servers

SQL*Net configuration

– Increase SDU_SIZE in listener.ora to 32 KB
– Increase SDU_SIZE in tnsnames.ora to 32 KB
– Specify tcp.nodelay = yes in sqlnet.ora
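For orientation, the SDU and nodelay settings could be placed as follows. This is a hedged sketch only: host, SID, and paths are placeholders, and the exact descriptor layout depends on your existing Oracle Net configuration:

```
# sqlnet.ora
TCP.NODELAY = yes

# tnsnames.ora – SDU goes inside the DESCRIPTION of the connect descriptor
<SID> =
  (DESCRIPTION =
    (SDU = 32768)
    (ADDRESS = (PROTOCOL = TCP)(HOST = <host>)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = <SID>))
  )

# listener.ora – SDU inside the SID_DESC
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SDU = 32768)
      (SID_NAME = <SID>)
      (ORACLE_HOME = <oracle_home>)
    )
  )
```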

Note 1045847 – ORACLE DIRECT PATH LOAD SUPPORT IN R3LOAD
Note 1741015 – Repartitioning of BW tables
Note 1413928 – Index corruption/wrong results after rebuild index ONLINE
Note 1635605 – CLIENT HANGS ON INSERT INTO TABLE WITH SECUREFILE LOB
Note 1813548 – Database Migration Option (DMO) of SUM
Note 1875778 – Performance Optimization for System Copy Procedures
Note 1918774 – Performance issues when running a SAP Installation / System Copy
Note 1967819 – R3load tools on HANA DDLHDB.TPL: create primary key after load
Note 1981281 – Oracle: Add. Information – Software Update Manager 1.0 SP12
Note 2008547 – Error during conversion of sub-partitioned tables
Note 2007272 – Identify Duplicated Records in Oracle tables
Note 2130908 – SUM tool error to connect HANA DB with target release 7.4 SP8 or higher


➕ So for most of the systems, this example of the file SUM/abap/bin/SAPup_add.par would increase the performance a lot:

*/clonepar/imp/procenv = HDB_MASSIMPORT=YES
*/clonepar/indexcreation = after_load
*/clonepar/clonedurations = <absolute_path>/MIGRATE_UT_DUR.LST,<absolute_path>/MIGRATE_DT_DUR.LST
/ORA/update_spacestat = 0

* Retired values – now default with the current R3load versions (Note 2118195) and the latest SUM versions (see below).


SAP Kernel handling – always use the latest version of the R3* tools and LibDBSL

During the migration to SAP HANA, DMO of SUM has to deal with three kernel versions:

– the kernel currently used by the system for the source DB (e.g. 7.20 EXT for Oracle)
– the kernel for the target release and the source DB (e.g. 7.40 for Oracle – used for the shadow system)
– the kernel for the target release and SAP HANA

The kernel currently used by the system can usually be found in /sapmnt/<SID>/exe/…
The other two target kernel versions (for AnyDB and SAP HANA) can be found in the SUM directory.
At the beginning of the migration process, those directories look like this:
SUM/abap/exe – contains the target kernel for AnyDB
SUM/abap/exe_2nd – contains the target kernel for SAP HANA

During downtime (phase MIG2NDDB_SWITCH) the directories will be switched. After the switch it looks like this:

SUM/abap/exe (target kernel for AnyDB) is moved to SUM/abap/exe_1st
SUM/abap/exe_2nd (target kernel for SAP HANA) is moved to SUM/abap/exe

As usual in SUM, the kernel is later copied from SUM/abap/exe to the system (phase KX_SWITCH).

❗ Together with the R3* tools, always exchange the LibDBSL as well, for both the source and the target DB. Currently, for kernel 7.42 (which is needed for SAP BW 7.40 SP08 and higher), these latest patches are needed:

Note 2054965 – R3load: TOC for logical table is incomplete in declustering mode after restart
Note 2124912 – R3load sporadically produces empty task files
Note 2118195 – R3load aborts during unicode conversion and declustering
Note 2144285 – R3load can produce duplicates at export if bulk fetch mode is active
Note 2130541 – SAP HANA: Deactivate fast data access in a non-Unicode system
Note 2144274 – R3load migration to SAP HANA database Revision 85.02 can lead to a disk full event


SAP tools (09.02.2017)   DBA tools    LibDBSL   R3szchk   R3ldctl   R3load
Oracle (ORA)             740O11.029   411       410       410       410
IBM/UDB (DB6)            310          411       410       410       410
HANA (HDB)               SP12.06      411       410       410       410
tp 7.45                  410
R3trans 7.45             410
SAP Kernel 7.45 is needed for DMO to 7.50 (UC only) – PL410
SAPHostAgent 7.23 SP22 (27.01.17)
SUM 1.0 SP19 PL00 (09.02.17) SAP Note 2328500
SUM 1.0 SP18 PL10 (02.02.17) SAP Note 2277058
SUM 1.0 SP17 PL12 (02.02.17) SAP Note 2223738


Note 2197897 – Central Note – Software Update Manager 1.0 SP16 [lmt_007]
* This version and higher works for all releases again.

Note 2223738 – Central Note – Software Update Manager 1.0 SP17 [lmt_008]
Note 2277058 – Central Note – Software Update Manager 1.0 SP18 [lmt_004]
Note 2328500 – Central Note – Software Update Manager 1.0 SP19 [lmt_005]


One Word about the HANA Version

To keep the additional complexity to a minimum, try to stick with the current HANA 1.0 SP10 revisions when running the DMO procedure. For a list of the available SAP HANA revisions, have a look at the

SAP First Guidance – SAP BW on SAP HANA Installation/Systemcopy/HANA Revisions

Nevertheless, some of the HANA settings mentioned here are beneficial regardless of the SAP HANA version used.

MERGE DELTA of <tablename> FORCE REBUILD;

Note 2112732 – Pool/RowEngine/MonitorView allocates large amount of memory

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'system') set ('expensive_statement', 'use_in_memory_tracing') = 'false';

Note 1912332 – SQL array DML operations fail with error code “-10709”

ALTER SYSTEM ALTER CONFIGURATION('indexserver.ini','SYSTEM') SET ('distribution', 'split_batch_commands') = 'off' WITH RECONFIGURE;

Note 2136595 – Dump DBIF_DSQL2_SQL_ERROR when doing a BW data load

dbs/hdb/connect_property = CONNECTTIMEOUT=0

Note 2105761 – High memory consumption by RANGE-partitioned column store tables due to missing optimize compression

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'system') set ('optimize_compression', 'min_hours_since_last_merge_of_part') = '0'

Note 2039810 – Table does not get merged automatically
Note 1930853 – HdbUserStore contains too many SAP HANA nodes
Note 2229248 – Long runtime in background job “RSUPGRCHECK” during SAP EHP upgrade
Note 2026343 – SP34: DMO/Housekeeping: Performance improvement of PSA Housekeeping tasks


With the release of SUM 1.0 SP14 and higher, a new, functionally improved and consolidated UI5 is available for all SUM/DMO procedures (except dual-stacks).

See also the Blog –
Upgrade was never been easier …

[Screenshot: the new DMO UI5 – newDMO_UI5.JPG]


Roland Kramer, PM EDW (BW/HANA/IQ), SAP SE
@RolandKramer

38 Comments

  1. Scott Groth

    Roland,

    Thank you for posting. This will be a great reference as new DMO features are released. Looking forward to updates in the future.

    One question I do have regarding future DMO performance enhancements: When will ROWID table splitting be available in SUM/DMO with Oracle as the source RDBMS?

    1. Roland Kramer Post author

      Hi,

      I will add this additional possibility to the Blog as well now.

This is not yet added to the documentation, as you overrule the DMO procedure, so you have to take this with caution.

      Best Regards Roland

      1. Scott Groth

        Roland,

        I received official word from DMO development that Oracle ROWID table splitting is native to SUM w/DMO 1.0 SP13.

        We are about to start a new SUM w/DMO and will reply with results and lessons learned regarding ROWID table splitting.

        Regards,

        Scott

  2. Srinivas Kakarla

Thanks Roland,

    For all the optimization parameters to be considered for a DMO execution. We are in middle of BW on HANA production migration and have explored all the options discussed above.

  3. Devpriy Trivedi

    Alert!!


As of HANA Rev. 85 you need to use the parameter /clonepar/indexcreation = before_load to avoid a crash of the indexserver, as indexing of big tables will then be done before the load. However, it will lead to no creation of primary and secondary indexes, due to a bug in SUM PL04 and PL10.

    1. Nicholas Chang

      Hi Devpriy Trivedi,

Based on Note 1813548 – Database Migration Option, it mentions that this only happens with tables of more than 2 billion rows.

      —————-< Update D028310 07/JAN/2015 >———————–

      ———————< D028310 05/AUG/2014 >————————-

      HANA: ATTENTION: Potential out-of-memory error on the HANA indexserver

      (Update 07.01.2015: This issue has been fixed with SAP HANA Revision 90. Before you start the update or upgrade with DMO, make sure that you have installed SAP HANA with revision 90 or higher.)

      Symptom and Cause:

      During the migration to the SAP HANA database, one or more tables with a table size of over 2 billion rows are part of the migration. If a table has not yet a primary key, this key has to be created during the migration. However, the primary key creation for large tables with more than 2 billion rows will be processed by the SAP HANA join engine instead of the OLAP engine of SAP HANA.This is due to a limitation in the OLAP engine.

      But the join engine requires more memory if the primary key is created on an already filled table, because it has to hold all partitions in memory on one single node at a certain point of time. This may cause an out-of-memory-error in the SAP HANA indexserver. Since the default setting for HANA migration in DMO is to create the primary key after the load in order to speed up loading performance, this constellation is very likely to occur with large tables of the above mentioned size.

      Solution:

      a) If you can predict before you start the update or upgrade that this error might occur, you set in the file SAPup_add.par the parameter

      /clonepar/indexcreation = before_load

      b) If you encounter this error during the migration, you proceed as follows:

      1.    Set in the file SAPup_add.par the parameter “/clonepar/indexcreation = before_load”

      2.    Reset the current EUMIGRATE phase by renaming the current <SUM_DIR>/migrate_<name> directory

      3.    Repeat the phase with the ‘init’ option

          1. Devpriy Trivedi

            Hello Nicholas – 

Mainly from the migration_dt log files, but we also randomly checked tables I was aware of via SE16, and I referred to DB02 for missing indexes.

We re-ran the migration without using the before_load parameter, and to avoid an indexserver crash we reduced the size of the big tables, which we had learnt from our previous run.

            Thanks

            Dev

              1. Devpriy Trivedi

I am not sure, but I think all the transparent tables should have indexes – after a successful migration there are no missing indexes reported in DBACOCKPIT.

                Thanks

                Dev

      1. Roland Kramer Post author

        Hi Nicolas,

I´m afraid your section about the parameter /clonepar/indexcreation = before_load
cannot be found in Note 1813548 – Database Migration Option (DMO) of SUM.


        The current HANA Version for SP08 is Rev. 85.02


Following the recommendations from the SAP Notes, /clonepar/indexcreation = after_load makes more sense.

        Note 1967819 – R3load tools on HANA DDLHDB.TPL: create primary key after load

        Note 2130908 – SUM tool error to connect HANA DB with target release 7.4 SP8 or higher

Splitting the tables beforehand, as stated in Note 1741015 – Repartitioning of BW tables, is the more useful approach here.

        Best Regards Roland

        1. Nicholas Chang

          Hi Roland,

SAP has changed the solution from setting “/clonepar/indexcreation = before_load” to “Contact the SAP support”, as the note was updated on 19/MAR/2015.

          —————-< Update D023536 19/MAR/2015 >———————–

          ———————< D028310 05/AUG/2014 >————————-

          HANA: ATTENTION: Potential out-of-memory error on the HANA indexserver

          (Update 07.01.2015: This issue has been fixed with SAP HANA Revision 90. Before you start the update or upgrade with DMO, make sure that you have installed SAP HANA with revision 90 or higher.)

          Symptom and Cause:
          During the migration to the SAP HANA database, one or more tables with a table size of over 2 billion rows are part of the migration. If a table has not yet a primary key, this key has to be created during the migration. However, the primary key creation for large tables with more than 2 billion rows will be processed by the SAP HANA join engine instead of the OLAP engine of SAP HANA.This is due to a limitation in the OLAP engine.
          But the join engine requires more memory if the primary key is created on an already filled table, because it has to hold all partitions in memory on one single node at a certain point of time. This may cause an out-of-memory-error in the SAP HANA indexserver. Since the default setting for HANA migration in DMO is to create the primary key after the load in order to speed up loading performance, this constellation is very likely to occur with large tables of the above mentioned size.

          Solution:
          Contact the SAP support.

  4. ruben torres

Hello Roland, what is the process, or what are the previous steps, to migrate a central system (Windows + Oracle) to HANA with DMO, with HANA & ABAP on one server?

  5. Amerjit CHAHAL

    Great work Roland. My one stop place for DMO.

A question though: I keep hearing conflicting information on whether you have to be OS/DB (TADM70) certified to perform a DMO. Are you able to give us an official answer?

    Cheers,

    Amerjit

    1. Roland Kramer Post author

      Hi,

      See the Statement from the SAP – Note 1813548 – Database Migration Option (DMO) of SUM 1.0 up to SP13

      >Organizational remark

      We recommend to involve a certified consultant “OS/DB migration for SAP NetWeaver”, but this is not required.<

Personally, I think it is more important to know the right things than to hide behind a certification which is a decade old. SAP HANA is not that old.

      Best Regards Roland

  6. Johnny Chen

    Hi Roland,

As you mentioned: * From SUM/DMO 1.0 SP13 the extension of the files changed from .LST to .XML. But from the upgrade guide (for SUM 1.0 SP13), I can see we still need to use MIGRATE_DT_DUR.LST. Can you help? 🙂

    BR,

    Johnny

      1. Johnny Chen

        Hi Roland,

One more question: in case the MIGRATE_DT_DUR.LST/XML files were generated by SUM SP12 in the last PoC, and in this PoC we used SUM SP13, I think we should again use the .LST file instead of the .XML file, right?

        Cheers,

        Johnny

    1. Nicholas Chang

      Hi Roland,

Also, the latest DMO Guide (SUM_SP13_DMO_RTC_15) mentions to only reuse error-free MIGRATE_UT_DUR.XML and MIGRATE_DT_DUR.XML files. My questions:

i) Can we still reuse the above XML files if, for example, we hit an activation error during the pre-processing/downtime phase of the first run?

ii) Is it necessary to set the parameter below? It is not specified in the guide, which asks to put the files in the “download” directory.

/clonepar/clonedurations=<absolute_path>/MIGRATE_UT_DUR.XML, <absolute_path>/MIGRATE_DT_DUR.XML


      Hope to hear from you soon.


      Thanks!

      Nicholas

      1. Roland Kramer Post author

i) It is not an error log, and it is completed when the tables are processed.

ii) What is wrong with specifying the absolute path for the XML files, wherever they are accessible?

        best regards Roland

        1. Nicholas Chang

          Hi @RolandKramer,

          Thanks for the reply.

Coming back to my first question: can we still reuse MIGRATE_UT_DUR.XML and MIGRATE_DT_DUR.XML for the next DMO run if we encountered an error (e.g. an activation error) during the first DMO run? I ask because the latest DMO Guide (SUM_SP13_DMO_RTC_15) mentions to only reuse error-free MIGRATE_UT_DUR.XML and MIGRATE_DT_DUR.XML files.


          Apologize for the confusion earlier.

          Thanks,

          Nicholas

  7. Gaurav Kumar Pandey

    Hi Roland,

The blog mentions that “SUM uses this data to optimize the performance of subsequent DMO runs on the same system.”

By “same system”, do we mean the same <SID> that was upgraded earlier, or a similar system? For SUM SP13 it can be used for a similar system, but the customer is using SP12, and I could not find a clear statement for SUM SP12.

    Thanks.

    BR,
    Gaurav

    1. Roland Kramer Post author

      Hi,

      It is related within the SUM SP.

      Of course the XML files can be used for other systems within the same system landscape as well.

      Best Regards Roland

  8. Omar Solis

Hi Roland, and thank you very much for your blog. Regarding database compression, do you have experience with how beneficial or harmful the DMO downtime process could be with or without database compression? Thanks

    Regards, Omar

    1. Roland Kramer Post author

      Hi,

From the DMO process, nothing will be harmed if the data is not compressed. You will see a much better compression rate on SAP HANA when the source data is uncompressed.

      Best Regards Roland

  9. Biral Gajjar

    Hallo Roland
Would Linux OS tuning help DMO optimization? Tuning like TCP buffers, swap, IO scheduler, storage block devices, swappiness. What about changing from the default page size (4 KB) to huge pages? What do you suggest regarding the Ethernet card?
    regards,

    biral

  10. Ambarish Satarkar

    Hallo Roland,

    Hello Stephen,

We tried using the UPGANA.xml file from previous DMO runs for an ECC EHP6 upgrade and migration to EHP7 on HANA. We used SUM 1.0 SP17 PL8.

When we provided UPGANA.xml in the download directory and entered the path in the SAPup_add.par file in /usr/sap/<SID>/SUM/abap/bin/, it was not accepted by the SUM tool.

We received the following error in phase MAIN_SHDIMP/SUBMOD_MIG_PREPARE/EU_CLONE_MIG_UT_PRP:

    Illegal top level tag analysis in ‘/usr/sap/<SID>/Download_DIR/UPGANA.xml’ – expected ‘Clone durations’ .

    Can you please help on this?

    Thanks,

    Ambarish

     

      1. Ambarish Satarkar

        Hi,

Files were copied to the download folder, but we received the same error “Illegal top level tag analysis in ‘/usr/sap/<SID>/Download_DIR/UPGANA.xml’ – expected ‘Clone durations’.” for each run. We tried reusing UPGANA.xml from 3-4 mock runs, and every time got the same error.

Also note that we reused these files for the same system with the same SID. MIGRATE_UT_DUR.XML and MIGRATE_DT_DUR.XML were taken successfully by the SUM tool.

The problem described above occurred for the “UPGANA.XML” file only.

Could a difference in DB size or table growth be the reason for this?

        Thanks,

        Ambarish

