
Optimize export/import using Distribution Monitor : Part I

  1. Introduction to Distribution Monitor 
  2. Basis of optimization

  1. Introduction to Distribution Monitor 

Distribution Monitor is one of the preferred tools for performing export/import during a Unicode conversion or OS/DB migration of a large database within a very limited downtime window. You should consider using Distribution Monitor when you do not have options like the "zero downtime migration" service offered by SAP, or proprietary storage migration tools from a storage vendor like EMC. The biggest advantage of using Distribution Monitor is parallel export/import, which drastically reduces your total export/import time and shifts the load from the source/target systems to external nodes, so that the export/import run can use the processing power of external systems (called nodes or application servers). However, configuring Distribution Monitor takes a lot of time and effort. Distribution Monitor itself is not an executable or binary; it consists of a set of scripts and parameter files, internally based on the SAP tools R3load, R3szchk, R3ldctl and R3ta. You can download Distribution Monitor from SMP. In the following section I will explain how Distribution Monitor works:


While configuring Distribution Monitor, you will configure more than one "node", also called an "application server". In this example I am showing four nodes. Each node has some local storage or SAN storage attached, and both the export and the import process have read/write access to this storage location. Typically, the export process on each node dumps (writes) data from the source system into the node's local storage, and the import process reads that data and writes it to the target system database. Parallel export/import processes run simultaneously across all four nodes, and each node handles a different set of "packages" or tables.

2. Basis of optimization

In the next section, I will explain the basic technique for optimizing export/import with Distribution Monitor. The first question is: what do we distribute? Put simply, we distribute the load among the various nodes.

The starting point of load distribution is the list of the biggest database tables and SAP packages, so first find the biggest (say, hundred) tables in the database. For a very large (SAP) database, it is a general rule of thumb that the 50-100 biggest tables represent 60-80% of the total database size. In the following example, I found that the 70 biggest tables (with a total size of 2.5 TB) represent 84% of the total database size (3.0 TB).
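This sizing step is easy to script once you have a table-size list from the database. A minimal sketch (the table names and sizes below are invented placeholders, not figures from the system above):

```python
# Sketch: rank tables by size and check what share of the database
# the biggest N tables cover. Input sizes are in GB; names are made up.

def top_share(table_sizes_gb, n):
    """Return (total_gb_of_top_n, fraction_of_db) for the n biggest tables."""
    ordered = sorted(table_sizes_gb.values(), reverse=True)
    top = sum(ordered[:n])
    return top, top / sum(ordered)

tables = {"TAB_A": 120.0, "TAB_B": 80.0, "TAB_C": 40.0, "TAB_D": 10.0, "TAB_E": 2.0}
total, share = top_share(tables, 3)
# Here the top 3 tables (240 GB of 252 GB) cover ~95% of this toy database.
```

In a real migration you would feed this from a size report (e.g. DB02 or a database catalog query) rather than a hand-written dictionary.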

The next question is how to distribute these tables and the SAP standard packages. You need to gather all the facts and figures first. I have outlined my findings as follows:

a)    There are four nodes available, so my target is to distribute approximately 750 GB of load per node (as the total DB size is 3.0 TB = 4 x 750 GB).

b)    The biggest 18 tables, representing half of the database size, will influence the entire export/import.

c)    Split all tables larger than 20 GB. In this example, there are 18 transaction tables and 3 cluster tables bigger than 20 GB, so each split chunk will be at most 20 GB in size.

d)    There are 52 tables bigger than 2 GB and smaller than 20 GB; their total size is 750 GB.

e)    List all SAP standard packages and their sizes (excluding the 70 biggest tables from steps b and d).
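The balancing idea behind points a) to e) can be sketched as a simple greedy assignment: always give the next-biggest piece of work to the least-loaded node. This only illustrates the roughly-750-GB-per-node target; the actual distribution in this article is done by hand, by category, and the names and sizes below are made up:

```python
# Sketch of the load-balancing idea: assign (name, size_gb) work items to
# the node with the smallest total so far, biggest items first.
# The item names and sizes are illustrative, not from a real system.

def distribute(loads_gb, n_nodes):
    """Greedily spread work items across n_nodes, largest first."""
    nodes = [{"total": 0.0, "items": []} for _ in range(n_nodes)]
    for name, size in sorted(loads_gb.items(), key=lambda kv: -kv[1]):
        target = min(nodes, key=lambda node: node["total"])
        target["items"].append(name)
        target["total"] += size
    return nodes

loads = {"SPLIT_SET_1": 770.0, "SPLIT_SET_2": 750.0,
         "MIDSIZE_TABLES": 500.0, "SAP_PACKAGES": 480.0}
plan = distribute(loads, 4)
# With four similar-sized buckets and four nodes, each node gets one bucket.
```

In practice, constraints such as "split and non-split tables must not share a node" (see the summary below in the article) override a purely size-based assignment.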

Summary of the distribution:

Based on all the facts and figures, I arrived at the following distribution:


i.    Distribute the biggest 18 tables between the first two nodes (Node 1 and Node 2); split all of them with a maximum chunk size of 20 GB and use the incremental index creation option for each chunk.

ii.   Put all tables smaller than 20 GB and bigger than 2 GB onto Node 3. These are non-split tables; use the "loadprocedure fast" option during import, which is a faster way of importing data than the usual insert statements.

iii.  Do not mix split and non-split tables, as most databases do not support incremental index creation for non-split tables; moreover, the direct load option using "loadprocedure fast" behaves differently for split and non-split tables on most databases.

iv.   Direct load using "loadprocedure fast LOAD" has some overhead, since it needs to perform pre- and post-processing work. If you use the direct load option for all standard SAP packages, which contain all remaining tables (except the 18 split tables and the 52 direct load tables), it will result in a large amount of overhead. To avoid this, do not use the direct load option "loadprocedure fast LOAD" for the standard packages: put all SAP standard packages onto a single node (Node 4) and let the import run through insert operations.

v.    Along with the SAP standard packages on Node 4, put all split cluster tables, as most databases do not support direct load import for cluster tables.
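The per-node import options above end up in the Migration Monitor import properties file used by each DistMon node. A rough illustration for the Node 3 case (paths and values are invented; the property names follow typical MigMon setups, so check the DistributionMonitorUserGuide for your version's exact keys):

```
# import_monitor_cmd.properties -- illustrative fragment for Node 3
# (non-split tables, direct load enabled)
importDirs=/distmon/node3/export
installDir=/distmon/node3/import
loadArgs=-stop_on_error -loadprocedure fast
jobNum=8
```

For Node 4 (standard packages and split cluster tables), the same file would simply omit `-loadprocedure fast` so the import runs through inserts.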

I have given you a particular example with some basic guidelines. Every migration is different, so gather all the facts and figures and set your distribution accordingly. I will publish "Part II: export/import configuration using Distribution Monitor" next.

  • Hi Dipam,

    I have gone through Export/Import using Distribution Monitor Part I and Part II.

    Eagerly waiting Part III.


    Sandeep Singh

  • Hi

    Need some assistance. My DB is 3 TB and I am running a normal migration export, i.e.

    1. Export prep

    2. Table Split Prep

    3. DB export

    Duration of initial runs:

    1st export run – 50 hrs (table DBTABLOG: 21 hrs)

    2nd export run, after archiving DBTABLOG – 34 hrs (table DBTABLOG: 7.5 hrs)

    I have old hardware, so I feel my constraint is in the source hardware. I would like to use MigMon and do the export and import together, and maybe also use DistMon, as I have 2 app servers on my production system (source and target).

    Can this be done, and if so, any suggestions?

    Is it possible for me to mail you?


    • Hi

      DBTABLOG is a special table – I faced a similar kind of issue with it. The best way is to reorg this table and split it for export/import.

      You can use MigMon for parallel export/import. Since you are planning to use two app servers, you can set up DistMon too.

      If you have further questions – let me know.



      • Hi Dipam

        Thanks for that info. I did an archive and also split the table. This has reduced the runtime on DBTABLOG.

        Need some clarification

        My bottleneck is still in my export – my import takes 7 hours on one server (which is enough time for us).

        I have done a test using DistMon (1 DB server and two app servers). I exported my DB using DistMon and have reduced the export to 25 hours. But now I have read Note 855772 and the DistributionMonitorUserGuide.doc, which says the following:

        4.1.1 NW04

          1. Install the Central Instance.
          2. Run the installation of the database.

             Attention: If you want to start the installation of the target system before the export of the source system has been started, make sure that at least the files

             <importDir>/LABEL.ASC
             <importDir>/DB/<your database>/DBSIZE.{TPL|XML}
             <importDir>/DB/DDL<your database>.TPL

             exist and contain the correct content.

          3. Select the SAPinst option "Installation using Migration Monitor".
          4. After the exit step, stop SAPinst.
          5. Start the DM import.
          6. After all packages have been loaded successfully, restart the installation tool and finish the installation.

                  The problem with this is that when doing a system copy for non-Unicode, the Master DVD does not allow you to install a system copy from zero with the non-Unicode option, as Note 1571295 says.

                  Can I still use DM on my source, combine the 3 export directories into one directory, and use the SWPM tool to do the import, i.e.

                  ./sapinst >> SAP ERP 6.0 >> System copy tools >> Target System >> Central system install ABAP



                  • Hi Morgs,

                    You can use DistMon and MigMon for a non-Unicode system. The key thing is setting up the database code page during DB installation. Say, for example, your OS-level code page is UTF-8 (that's a Unicode code page); then your database, as well as DistMon and MigMon, will take this default code page. That means you need to explicitly specify the "codepage = xxxx" property in the export command file and the import command file.



                    • Hi Dipam

                      Appreciate the response. Just a bit confused.

                      Thanks for that as well. I have the datacodepage and dbcodepage set to 100 in my source system's DistMon distribution_monitor_cmd.properties file. So, if I do the install on the target side the DistMon way and install my CI first, as in step 4.1.1 above, it greys out the non-Unicode option. Do I go ahead and choose the Unicode option, although I have set my DB export to codepage 1100?

                      Screen- NON Unicode.jpg

                      And if so: my DB export now sits locally on 3 local drives, i.e. apps1, apps2 and the DB server.

                      My import is on a different server (a new, faster server; my legacy servers are too slow), and I only have a DB server. Can I still use DistMon although it's just one server?
