Naveen Garg

Advanced Migration Techniques using Distribution Monitor

We can use advanced migration techniques, together with any additional infrastructure available, to reduce the downtime and make the system available to end users faster.

Below are a few techniques to consider for reducing the downtime:

1. We should perform package splitting: the more packages we have, the more R3load processes we can run in parallel to process them, which reduces the downtime.

2. We can perform table splitting so that a single large table is split into a number of packages that can be processed by multiple R3load processes (a worked illustration follows this list).

3. We can use the Distribution Monitor if we have sufficient resources (additional application servers) to perform the migration, which reduces the overall downtime.

4. You can consider using parallel export and import, provided your target database is ready.
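As a simple illustration of why splitting reduces the critical path (the numbers are hypothetical and actual throughput depends on your hardware and table contents): if the largest package holds a 200 GB table and a single R3load exports roughly 50 GB per hour, that one package keeps the export running for about 4 hours. Splitting the table into 4 packages handled by 4 parallel R3load processes brings the critical path down to roughly 1 hour, provided the CPU, I/O and temporary tablespace can sustain the additional parallel load.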

Points to be noted before launching the Distribution Monitor:

1. You should use Java 1.6

2. Only the net exchange variant can be used with the Distribution Monitor; the FTP variant is not yet supported by the Distribution Monitor

3. You should have additional hardware resources

4. The Distribution Monitor does not support system copies of releases lower than SAP_BASIS 6.20.

5. The Distribution Monitor only performs the export and import for ABAP-based systems; for Java systems you can use SAPinst to perform the export and import.

Tools in the Distribution Monitor Package


The Distribution Monitor is delivered with the following tools:

  •  Distribution Monitor
  •  MIGMON
  •  Package Splitter
  •  Time Analyzer

How to use the Distribution Monitor:

To use the Distribution Monitor, there are a few requirements we need to consider in a customer landscape:

1. We should have sufficient resources available (additional application servers)

2. We should have sufficient resources on the database server and increase the Oracle sessions as required, based on the resources available on the DB server.

3. We should have sufficient space in the source database to increase the temporary tablespace to accommodate the parallel run of R3load processes. As per SAP recommendations, the PSAPTEMP (or TEMP) tablespace should be around 20% of the used database size; for example, if your database is 1 TB, you should try to increase the TEMP tablespace to around 200 GB.

4. Please ensure that you use a suitable number of R3load processes. As a rule of thumb, SAP suggests 1 to 3 R3load processes per CPU for export and 1 to 4 R3load processes per CPU for import; for example, an export host with 16 CPUs would run roughly 16 to 48 parallel R3load export processes. Know your hardware capacity before starting the export and import.

5. You need to set the DB environment settings in the profile so that you can connect to the source and target databases respectively (an environment sketch follows this list).

6. It is strongly recommended to use R3load data files (<host>.dataDirs) and R3load control files (<host>.exportInstallDir, <host>.importInstallDir) only on local file systems. NFS-mounted file systems are not recommended by SAP, as they tend to fail under high parallel load.

7. On each host, set commDir in distribution_monitor_cmd.properties (a properties and command sketch follows this list).

8. Do not modify the template files

9. On the host where you want to run the preparation step, enter the options in the distribution_monitor_cmd.properties file (and further options if you like).

10. Run the preparation step: distribution_monitor -p

11. After running the preparation mode for the first time, it is recommended to save the results of the R3ldctl, R3szchk, PkgSplit and (optional) R3ta steps and to set the corresponding skip options in distribution_monitor_cmd.properties. This allows parts of the preparation to be repeated without rerunning unnecessary steps.

12. After changes to table structures, R3ldctl and R3szchk have to be repeated.

13. If export and import run on separate databases: prepare the target database, then start the export and import on each host in parallel. If export and import run on the same database: start the export on each host; after all exports have finished, drop the source database, prepare the target database, and then start the import on each host.

14. During export and import you can monitor the state with the display mode (distribution_monitor -d).
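For point 5, a minimal sketch of the environment for reaching the source database from an export host; the paths and the <SOURCE_SID> placeholder are examples only, and the same pattern with the target system's TNS alias and kernel applies on the import side:

  export SAPSYSTEMNAME=<SOURCE_SID>
  export ORACLE_SID=<SOURCE_SID>
  export dbs_ora_tnsname=<SOURCE_SID>            # alias defined in tnsnames.ora
  export TNS_ADMIN=/oracle/client/network/admin  # directory containing tnsnames.ora (example path)
  export DIR_LIBRARY=/path/to/kernel             # kernel directory with R3load and the database library
  export LD_LIBRARY_PATH=$DIR_LIBRARY:$LD_LIBRARY_PATH
  export PATH=$PATH:/oracle/client/bin           # Oracle client tools (example path)

A quick verification is R3load -testconnect, which should finish without errors before any export or import is started.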
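For points 7 to 14, a minimal sketch of distribution_monitor_cmd.properties and the run commands; only commDir and the <host>.* keys mentioned above are shown, host names and paths are placeholders, and the remaining options should be taken from the delivered template as described in the Distribution Monitor guide:

  # shared exchange directory visible to all participating hosts
  commDir=/sap/commDir
  # R3load data and control files on local file systems of each host
  host1.dataDirs=/local/host1/data
  host1.exportInstallDir=/local/host1/exportInstall
  host1.importInstallDir=/local/host1/importInstall

  ./distribution_monitor.sh -p    # preparation step, run once on the preparation host
  ./distribution_monitor.sh -e    # export, run on each participating host
  ./distribution_monitor.sh -i    # import, run on each participating host
  ./distribution_monitor.sh -d    # display mode to monitor export/import progress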

Comments
Chris Kernaghan

      This is not a bad blog, although you should have used the research process to extend your knowledge rather than just replaying a very narrow view on a process.

      Where you could have done better in my opinion

1. You do not need to exclusively use application servers to increase the throughput of the export/import process - any server will do, as long as it has the database client on it and a disk area capable of holding the export files or from which the files can be imported.

      2. You talk about Oracle only, there are several other database choices, MS SQL server, IBM DB2, Sybase ASE - but you do not mention these even though their architectures are not dissimilar to Oracle. This would be a much better blog if you did some more research and applied your good Oracle learnings to those databases.

      3. Your CPU calculations are a little high for my liking and you need to differentiate between CPUs and Cores. My preference is 2.5 R3Load processes per core on Export and 3 R3Load processes per core on Import (but only on Sorted Exports). On Unsorted loads then you can reverse this calculation as the Import is heavier than the export.

4. Diagrams are very useful for explaining concepts quickly; if you want examples, please have a look at my blog series on CUUC.

      Keep on blogging

      Chris

Naveen Garg (Blog Post Author)

      Hi Chris,

Thank you for the comments. I created this blog based on my experience with one migration, but yes, as you mentioned, we can extend it further.

      Regards

      Naveen

Sudip Saha

      Dear Naveen / Chris,

We are doing an OS migration from HP-UX to Linux. The database will remain the same but will be upgraded from 11g to 12c. We have a source system, a target system, and two Unix machines as application server nodes. We have run the preparation mode on the source system; the files were generated in the /sap/commDir directory of the source system, and we mounted this directory on the target Linux server and the two app server hosts. The target will be newly built, so my questions are below:

1. Do we need to execute SWPM for the database instance installation and then stop SWPM in between and run the Distribution Monitor, as we do for manual Migration Monitor steps?

2. No SAP instance is running on the app server hosts, and we copied R3load and the other R3* files to those hosts. So how will we run the Distribution Monitor on the app hosts? Do we need to execute ./distribution_monitor -e (export) on the app servers, or on the source?

3. The same question for import: where will we run the import, on the app hosts or on the target system?

I mean, where will I execute distribution_monitor.sh -e (export) and distribution_monitor.sh -i (import)?

4. The source database should be accessible from the app servers/hosts, right? How can we access the source database from an app host? Using a tnsnames.ora entry?

There is no step-by-step document available in SMP, at least none with clear steps on how to do this.

      Your quick answer will be highly appreciated.

Naveen Garg (Blog Post Author)

      Hi Sudip,

      Please find the comments below:

1. Do we need to execute SWPM for the database instance installation and then stop SWPM in between and run the Distribution Monitor, as we do for manual Migration Monitor steps?

Yes, choose the option to run MigMon manually during parameter input, and SWPM will stop at that point.

2. No SAP instance is running on the app server hosts, and we copied R3load and the other R3* files to those hosts. So how will we run the Distribution Monitor on the app hosts? Do we need to execute ./distribution_monitor -e (export) on the app servers, or on the source?

Please use the Distribution Monitor user guide to set up the target systems. You need to set things up in such a way that from the same host you can communicate with both the source and the target server.

• In one session, set up the environment for the <sidadm> user with the TNS entry of the source system and the kernel of the source system release copied to the target server (use the latest R3 tools).
• In a second session, set up the environment for the <sidadm> user with the TNS entry of the target system and the kernel of the target system release copied to the target server (use the latest R3 tools).

3. The same question for import: where will we run the import, on the app hosts or on the target system?

I mean, where will I execute distribution_monitor.sh -e (export) and distribution_monitor.sh -i (import)?


Please use the information provided for point 2 and set up your app servers as per the Distribution Monitor guide.

4. The source database should be accessible from the app servers/hosts, right? How can we access the source database from an app host? Using a tnsnames.ora entry?


In one session, set up the environment for the <sidadm> user with the TNS entry of the source system and the kernel of the source system release copied to the target server (use the latest R3 tools). A minimal tnsnames.ora sketch is shown below.
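A minimal tnsnames.ora entry sketch for reaching the source database from the app host; the host name, port and service name are placeholders for your own landscape, and the same pattern applies for the target database in the second session:

  <SOURCE_SID> =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = <source_db_host>)(PORT = 1521))
      (CONNECT_DATA =
        (SERVICE_NAME = <SOURCE_SID>)
      )
    )

TNS_ADMIN must point to the directory that contains this file, and R3load -testconnect can be used to verify the connection from each host.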

Hope this helps you with the setup of DistMon.

      Thanks

      Naveen Garg

Sudip Saha

(screenshot: /wp-content/uploads/2016/05/distmon_962390.jpg)

Sudip Saha

I configured all the environment parameters on the app server the same as on the source system, but while testing R3load and R3trans I am getting errors.

      => /sap/nobackup/sudip/kernel/R3trans -d

      This is /sap/nobackup/sudip/kernel/R3trans version 6.25 (release 742 - 06.05.15 - 20:15:05).

      unicode enabled version

      2EETW169 no connect possible: "maybe someone set invalid values for DIR_LIBRARY ('/sap/nobackup/sudip/kernel') or dbms_type ('ORA')"

      /sap/nobackup/sudip/kernel/R3trans finished (0012).

      => /sap/nobackup/sudip/kernel/R3load -testconnect

      /sap/nobackup/sudip/kernel/R3load: START OF LOG: 20160527090855

      (BLD) INFO: sccsid "@(#) $Id: //bas/742_REL/src/R3ld/R3load/R3ldmain.c#8 $ SAP"

      (BLD) INFO: kernel release 742 [UNICODE]

      (BLD) INFO: data format 1.8

      (BLD) INFO: patch number 111

      (BLD) INFO: compiled on "Jun 29 2015 22:52:10"

      (PRC) INFO: working directory "/sap/nobackup/sudip/Distribution_monitor"

      (PRC) INFO: called "/sap/nobackup/sudip/kernel/R3load -testconnect"

      (PRC) INFO: process id 5931

      (DB) ERROR: DbSlControl(DBSL_CMD_IMP_FUNS_SET) rc = 20

      (DB) ERROR: DbSlErrorMsg rc = 20

      /sap/nobackup/sudip/kernel/R3load: job finished with 1 error(s)

      (STAT) DATABASE times: 0.018/0.010/0.010 100.0%/100.0%/100.0% real/usr/sys.

      /sap/nobackup/sudip/kernel/R3load: END OF LOG: 20160527090855

I configured the environment parameters below:

      export SAPSYSTEMNAME=SOURCE SID

      export TNS_ADMIN=/usr/oracle/11.1.0.7_Client/network/admin

      export DIR_LIBRARY=/sap/nobackup/sudip/kernel

      export DB_SID=RES

      export ORACLE_SID=SOURCE SID

      export dbs_ora_tnsname=SOURCE SID which is mentioned in the tnsnames.ora

      export PATH=$PATH:/usr/oracle/11.1.0.7_Client/bin

      export LD_LIBRARY_PATH=/sap/nobackup/sudip/kernel

      Regards,

      Sudip