DMO: Handling table comparison checksum errors
An introduction to the table comparison option of DMO is available in the blog DMO: table comparison and migration tools.
This blog focuses on analyzing the results of the table comparison option of DMO and discusses options for handling typical checksum errors.
There are two general ways in which differences between the source tables and the target tables can arise:
- New entries in the source tables or in the target tables
- Content changes in the source tables or in the target tables
Differences typically arise due to the following root causes:
- Either the source or the target database tables are directly changed through database interfaces by other systems
- Database inconsistencies
- A job in the source system creates or changes entries in the source tables
- The source system is started manually during the SUM run
Let us have a look at the last example in detail:
During the SUM run the source system is started manually. A user logs on to the system, creates a new user and adds a profile to this user.
The table comparison is executed after all tables have been copied. If these actions are performed after the affected tables were migrated by DMO but before they are checked by the table comparison, differences in the content of the tables will be detected.
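At its core, the detection amounts to computing a checksum per table on both databases and comparing them. A minimal Python sketch of the idea (illustrative only; the function name, the canonicalization, and the sample rows are assumptions, not the SUM's actual algorithm):

```python
import zlib

def table_checksum(rows):
    """Order-independent checksum over all rows of a table.

    Each row is canonicalized to a string and hashed with CRC32;
    XOR-combining the per-row values makes the result independent of
    the order in which the database returns the rows.
    """
    checksum = 0
    for row in rows:
        canonical = "|".join(str(field) for field in row)
        checksum ^= zlib.crc32(canonical.encode("utf-8"))
    return checksum

# Rows of USR04 as they were migrated ...
source_rows = [("000", "DDIC"), ("000", "SAP*")]
# ... and the same table after a user was created on the started system
target_rows = source_rows + [("000", "DDIC_TEST2")]

if table_checksum(source_rows) != table_checksum(target_rows):
    print("CRC differences for table 'USR04'")
```

Because XOR is commutative, identical row sets produce identical checksums regardless of row order, while the extra row inserted during the SUM run makes the two checksums differ.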
At this point we have several options to handle the checksum differences:
- Repeat phase: This simply recalculates the checksums for the tables where different checksums were detected. Usually, just repeating the phase will give us the same result. However, this option makes sense if the differences were fixed manually, for example by removing the problematic table entries on both the source and the target database.
To find out which table entries lead to different checksums, we can look at the log files EUMIGRATEUTCRCRUN.LOG (in the phase EU_CLONE_MIG_UT_CRC_RUN) or EUMIGRATEDTCRCRUN.LOG (in the phase EU_CLONE_MIG_DT_CRC_RUN).
In our example, we have CRC checksum differences for the table USR04:
2EETQ170 CRC differences for table 'USR04', key fields: "MANDT" "BNAME"
4EETQ171 Key: I '000' 'DDIC_TEST2'
- For a more detailed handling of the tables with checksum differences, we can additionally select the checkbox “Accept errors and repeat phase”. The SUM will then evaluate the file MIGRATE_UT_CRC_CHECKDIFF.LST or MIGRATE_DT_CRC_CHECKDIFF.LST, which is referred to in the dialogue.
We can see that the entry DDIC_TEST2 was inserted (letter I in front of the key).
Other possible letters to describe the difference are U for updated and D for deleted.
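The I/U/D classification can be reproduced by comparing the two tables keyed by their primary key. A hedged sketch (the helper name and sample data are illustrative, not taken from the SUM):

```python
def classify_differences(source, target):
    """Label each differing primary key the way the CRC log does:
    I(nserted), U(pdated) or D(eleted), seen from the target side."""
    diffs = {}
    for key in source.keys() | target.keys():
        if key not in source:
            diffs[key] = "I"   # row exists only in the target
        elif key not in target:
            diffs[key] = "D"   # row is missing from the target
        elif source[key] != target[key]:
            diffs[key] = "U"   # same key, different row content
    return diffs

# key = (MANDT, BNAME), value = the remaining row content
source = {("000", "DDIC"): "old", ("000", "SAP*"): "x"}
target = {("000", "DDIC"): "new", ("000", "DDIC_TEST2"): "y"}
for key, letter in sorted(classify_differences(source, target).items()):
    print(f"4EETQ171 Key: {letter} " + " ".join(f"'{part}'" for part in key))
```

For this sample data, the sketch prints a U line for DDIC (changed content), an I line for DDIC_TEST2 (inserted in the target), and a D line for SAP* (missing from the target).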
If the SUM finds more than 200 differing rows in a table, the remaining differences are summarized in blocks:
2EETQ170 CRC differences for table 'USRBF2', key fields: "MANDT" "BNAME" "OBJCT" "AUTH"
4EETQ171 Key: I '000' 'DDIC_TEST' 'S_USER_TCD' '&_SAP_ALL' - '000' 'DDIC_TEST2' 'C_KLAH_BSE' '&_SAP_ALL'
4EETQ171 Key: I '000' 'DDIC_TEST2' 'C_KLAH_BSE' '&_SAP_ALL' - '000' 'DDIC_TEST2' 'S_BRAN_ADM' '&_SAP_ALL'
4EETQ171 Key: I '000' 'DDIC_TEST2' 'S_BRAN_ADM' '&_SAP_ALL' - '000' 'DDIC_TEST2' 'S_LANG_ADM' 'S_LANG_ALL'
4EETQ171 Key: I '000' 'DDIC_TEST2' 'S_LANG_ADM' 'S_LANG_ALL' - '000' 'DDIC_TEST2' 'S_RS_PLST' '&_SAP_ALL'
4EETQ171 Key: I '000' 'DDIC_TEST2' 'S_RS_PPM' '&_SAP_ALL'
4EETQ171 Key: I '000' 'DDIC_TEST2' 'S_RS_PPMAD' '&_SAP_ALL'
4EETQ171 Key: I '000' 'DDIC_TEST2' 'S_RS_RSFC' '&_SAP_ALL'
This keeps the log from becoming too verbose: when a table has more than 200 rows with different checksums, there is probably a general issue with the table, and we will most likely not try to correct the differences manually.
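The summarization logic can be sketched as follows. Only the 200-key limit is taken from the log behaviour described above; the block size and the 'first - last' formatting here are assumptions for illustration:

```python
def summarize_diff_keys(keys, limit=200, block_size=5):
    """List differing keys individually up to `limit`, then collapse
    the sorted remainder into 'first - last' blocks to keep the log
    short. `block_size` is an illustrative value."""
    keys = sorted(keys)
    lines = [f"Key: I {key}" for key in keys[:limit]]
    rest = keys[limit:]
    for i in range(0, len(rest), block_size):
        block = rest[i:i + block_size]
        lines.append(f"Key: I {block[0]} - {block[-1]}")
    return lines

keys = [f"'000' 'DDIC_TEST{i:04d}'" for i in range(210)]
lines = summarize_diff_keys(keys)
print(len(lines))  # 202 lines: 200 individual keys plus two 5-key blocks
```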
MIGRATE_UT_CRC_CHECKDIFF.LST
#
# By choosing "Accept errors" in the repeat dialogue,
# the following lines in the list below will be handled as follows:
#
# Line                        | Repeat cloning | Check again
#                             | table contents |
# ----------------------------+----------------+---------------
#  <TABLENAME>                | yes            | yes
# #<TABLENAME>                | no             | yes
#  <TABLENAME> ignchecks      | no             | no
#
# Remarks:
# - Commenting out the line is useful if the wrong count/checksum was
#   fixed on the DB manually.
# - In case the cloning is to be repeated, one can change the
#   behaviour during cloning by adding phrases like 'nosplit'
#   after the table name.
#
# Tables with different row counts/checksums:
INDX # 1 bad checksum(s)
CCMSBITHRH # 1 bad checksum(s)
USREFUS # 1 bad checksum(s)
TEMSGU # 1 bad checksum(s)
USRSTAMP # 1 bad checksum(s)
CCMSBIDATA # 1 bad checksum(s)
USR04 # 1 bad checksum(s)
NRIV # 1 bad checksum(s)
ALTSTLOD # 1 bad checksum(s)
TUCON # 1 bad checksum(s)
CDPOS # 1 bad checksum(s)
CCMSBIMETH # 1 bad checksum(s)
USRBF2 # 1 bad checksum(s)
USRBF3 # 1 bad checksum(s)
The lines will be handled with the following logic:
- For all lines that contain just the table name, the SUM will repeat the cloning of the table contents and check them again for differences. Special cloning options can be added after the table name, such as ‘nosplit’, which prevents the table from being split during the cloning procedure.
- If a # is added in the line directly before the table name, the SUM will only check the table contents again by recalculating and comparing the checksums.
- To exclude a table from the table content check completely, we can add a space character and the word ignchecks after the table name. In this case, the table will neither be cloned again nor re-checked.
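These three rules can be captured in a small parser. The following sketch (a hypothetical helper, not the SUM's own code) maps each table line of the .LST file to the (repeat cloning, check again) decision from the table in the file header:

```python
def parse_checkdiff_line(line):
    """Return (table, repeat_cloning, check_again) for a table line of
    MIGRATE_*_CRC_CHECKDIFF.LST, or None for blank/remark lines."""
    stripped = line.strip()
    if not stripped or stripped == "#" or stripped.startswith("# "):
        return None                       # header or remark line
    check_only = stripped.startswith("#") # '#TABLENAME': check again only
    body = stripped.lstrip("#")
    body = body.split("#", 1)[0].strip()  # drop '# n bad checksum(s)' remark
    if not body:
        return None
    table, *options = body.split()
    if "ignchecks" in options:
        return (table, False, False)      # neither recloned nor rechecked
    if check_only:
        return (table, False, True)       # only checksums recalculated
    return (table, True, True)            # cloning repeated, then rechecked

print(parse_checkdiff_line("USREFUS nosplit # 1 bad checksum(s)"))  # ('USREFUS', True, True)
print(parse_checkdiff_line("#USR04 # 1 bad checksum(s)"))           # ('USR04', False, True)
print(parse_checkdiff_line("TUCON ignchecks # 1 bad checksum(s)"))  # ('TUCON', False, False)
```

Note the distinction between a commented-out table name (`#USR04`, no space after the `#`) and a remark line (`# ...`, with a space), which this sketch uses to tell the two apart.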
INDX # 1 bad checksum(s)
CCMSBITHRH # 1 bad checksum(s)
USREFUS nosplit # 1 bad checksum(s)
#TEMSGU # 1 bad checksum(s)
#USRSTAMP # 1 bad checksum(s)
#CCMSBIDATA # 1 bad checksum(s)
#USR04 # 1 bad checksum(s)
#NRIV # 1 bad checksum(s)
#ALTSTLOD # 1 bad checksum(s)
TUCON ignchecks # 1 bad checksum(s)
CDPOS ignchecks # 1 bad checksum(s)
#CCMSBIMETH # 1 bad checksum(s)
#USRBF2 # 1 bad checksum(s)
#USRBF3 # 1 bad checksum(s)
In this example the tables INDX, CCMSBITHRH and USREFUS are cloned and checked again.
The table USREFUS is cloned with the special option ‘nosplit’, so it will not be split during the cloning procedure.
The tables TEMSGU, USRSTAMP, CCMSBIDATA, USR04, NRIV, ALTSTLOD, CCMSBIMETH, USRBF2 and USRBF3 will only be checked again.
The tables TUCON and CDPOS will be ignored and excluded from the checks.
After “Accept errors and repeat phase” has been executed, the file MIGRATE_*_CHECKDIFF.LST will only contain the remaining tables with checksum errors, if any. Otherwise it is deleted after the successful execution of the phase.
If certain tables have been excluded from the checks by adding ignchecks after the table name, they are listed in a new file *_CHECKIGN.LST.
In our example, it has the following content:
CDPOS igncount igncrcdiffs
TUCON igncount igncrcdiffs
The ignchecks option should only be used if instructed to do so by SAP support.
The table comparison option is useful as a preparation step in non-productive DMO runs to reveal potential issues before a productive run.
However, comparing the table content will extend the downtime, so it is not recommended to enable table comparison for a DMO run on a productive system.
As a safeguard, DMO will in any case compare the number of rows for each table (count *) before and after the migration. This cannot be switched off.
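Conceptually, this mandatory safeguard is a per-table count comparison. A minimal sketch using SQLite in-memory databases as stand-ins for the source and target (table names and data are invented for the demo):

```python
import sqlite3

def compare_row_counts(source_conn, target_conn, tables):
    """Run SELECT COUNT(*) per table on both databases and return the
    tables whose counts differ, mirroring the mandatory count* check."""
    diffs = {}
    for table in tables:
        src = source_conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        tgt = target_conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        if src != tgt:
            diffs[table] = (src, tgt)
    return diffs

# Tiny demo: one matching table and one with an extra row in the target
src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")
for conn in (src, tgt):
    conn.execute("CREATE TABLE usr04 (bname TEXT)")
    conn.execute("CREATE TABLE patchhist (entry TEXT)")
src.execute("INSERT INTO usr04 VALUES ('DDIC')")
tgt.execute("INSERT INTO usr04 VALUES ('DDIC')")
src.execute("INSERT INTO patchhist VALUES ('a')")
tgt.executemany("INSERT INTO patchhist VALUES (?)", [("a",), ("b",)])

print(compare_row_counts(src, tgt, ["usr04", "patchhist"]))
# {'patchhist': (1, 2)}
```

A count comparison only detects inserted or deleted rows; content changes that leave the row count unchanged need the checksum-based table comparison described above.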
Thank you for the explanation. We recently did a DMO run from ECC EHP6 on an SQL database to ECC EHP8 on HANA. The source database size is approx. 11 TB, with 8.2 TB of application table data. We are using the latest SUM tool and kernel executables. We chose the no table comparison option.
We are concerned about the runtime of the row count comparison (count *), which was 29.5 hours. 99% of the row counts finished in 20 minutes, and the remaining 1% (2 buckets with a set of 500 tables each) took 29 hours.
As the row count is still part of the downtime, we need to understand why it took so long.
What can be done to avoid this long runtime? Please note that there were no errors during the row count and no bucket failures.
We completed the application table data migration in 34 hours, at approx. 200 GB/hour.
We are upgrading ERP 6.0 EHP7 to EHP8 and migrating the database from Oracle 12c to HANA 2.0.
In the execution phase we are facing the error below. Could you please suggest how to resolve it?
How did you solve your problems?
We are having the same issue.
Did you manage to go on?
Hi Martin/ Joan,
Did you manage to solve the problem? We are having the same issue. Could you please suggest an approach?
Hi Bhanu Gattu
First, sorry for the late response, I was out.
Second, yes, we finished our OS/DB migration + Unicode conversion. At that point I read this SAP note, which might help you:
2348804 - Count* replaces Table Checker for DMO use cases and how to use it for Audit Purposes.
In our case we just ignored the table and continued by accepting the error, because that applied to us.
Hope you managed to go on as well
Thanks, Joan for the response.
Yes, we managed it by accepting the error.
We are doing DMO with the system move option from the source (AS400, DB4, SAP NetWeaver 7.3, big endian) to HANA on SLES on IBM Power hardware (little endian, NetWeaver 7.5).
We also received the same error for the row count comparison during the DMO run in the execution phase, in the file MIGRATE_DT_CHECKDIFF.LST for the table PATCHHIST (the source has 97 entries for this table and the target has 98 entries), and got the result below:
Entries in Source: 97
Entries in Target: 98
# Tables with different row counts/checksums:
PATCHHIST # old: 97 new: 98
We compared the content of the table in both systems and found that the target system has all the entries of the PATCHHIST table from the source system plus one extra entry for the R3trans 749 kernel. Except for this entry, all entries are the same in the source and target systems. Kindly let us know how to proceed. Can this be ignored?
Thanks & Regards
We have a problem due to a different row count in DMO (# Tables with different row counts). We have more entries in the target compared to the source. How do we solve the problem?
I too faced the issue, but for the row count error. Your blog helped me with the basic information.
I have updated the solution in my blog, which may be helpful to others who encountered the same issue.
Venkata S Pagolu.