Stefan Seemann

Using SWPM and Oracle’s PL/SQL-Splitter for migrations towards SAP HANA

To improve the performance of the export and also shorten the duration of the table splitting itself, Oracle offers a splitting mechanism which is quite fast. The so-called PL/SQL-Splitter uses the ROWIDs of a table to calculate the WHERE clauses used by the exporting R3load processes.

As a ROWID represents the physical address of a record in a table, there is no guarantee that it will be the same in the source and the target database.

Due to this, the ROWID can only be used to export the data using multiple R3load processes in parallel. However, R3load also uses the WHERE clause on the import side in case it is restarted after an error: it then adds the WHERE clause to the DELETE statement to delete only its specific subset of the table's data.

That means: if a table is imported into SAP HANA in parallel and the WHERE clauses are based on ROWIDs, the whole table needs to be truncated and loaded again in case one R3load fails. But due to the performance improvement on the export side, it might be worth the effort.
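For illustration, the WHERE conditions generated from ROWID ranges might look roughly like this (table name and ROWID values are invented, and the exact WHR file layout depends on your R3load/SWPM version):

```
tab: REPOSRC
WHERE ROWID BETWEEN 'AAAS5MAAEAAAACBAAA' AND 'AAAS5MAAEAAAACn///'
```

Such ROWID ranges identify block addresses in the Oracle source and are meaningless in SAP HANA, which is why they cannot be reused for targeted deletes on the import side.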

This blog post describes a method for using this splitting mechanism together with the software provisioning manager. The SWPM version should be at least 1.0 SP7 PL4. The scenario needs to be tested well before it is used for a productive migration; the error case should also be tested so you are familiar with the recovery procedure.

Creating the structure of the export DVD

First things first: an empty export medium (export DVD) needs to be created. As the source database is Oracle, you have to choose Oracle in the Product Catalog.


The target database type will, of course, be SAP HANA. The rest of the routine is pretty straightforward.

This example assumes that not all tables which are about to be split will be split with the PL/SQL-Splitter. That means we have to call the table splitting procedure of the software provisioning manager twice. In this case it is important that the PL/SQL-Splitter is called first.

Splitting by ROWID

More information about the PL/SQL-Splitter can be found in OSS note 1043380 (Efficient Table Splitting for Oracle Databases).


To get to the PL/SQL-Splitter you have to choose Oracle as target database type.


After the procedure has finished successfully, you will find the result on the export DVD. If there are other tables which shall be split with the standard tools, you have to call the software provisioning manager again. The new WHERE files will be added to the existing ones.

Splitting with standard tools


This time you have to choose SAP HANA Database as the target database type. Otherwise the existing WHERE files will be deleted.


After all splitting is done, you should have WHERE files for all your tables, and the file whr.txt needs to contain the names of all split tables.

Export of the source database

The export of the source database is the same as for all migrations towards SAP HANA. You can use all possible options.


Installation of the target system

On the target side we have to do a few manual steps.


The new option ‘Omit tables from import’ offers the possibility to exclude tables and packages from the import. We use this feature to exclude the tables which were split by the PL/SQL-Splitter; we will import them manually.


In this example I have selected the checkbox “Create indexes afterwards”. This is not necessary; it was just the default.


Once the import is running, we can simply grep the command line from the running processes. We can see that the migration monitor is called with a few additional options:

  • -omit V
    The views will not be created in this run; they will be created once all tables are imported. Otherwise we might run into errors because the excluded tables are not ready yet.
  • -packageFilter
    This file contains the tables or packages which are excluded.
  • -filterUsage omit
    The tables in the filter file are omitted.
  • -onlyProcessOrderBy true
    Only the tables and packages listed in the file order_by.txt are processed by the migration monitor, not all packages which are on the export DVD. Due to this, the excluded tables won't be processed here.
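As a sketch, the package filter file could simply list the split tables, one name per line (the contents below are an illustrative assumption; check the file SWPM actually generated in your installation directory):

```
REPOSRC
T100
```

The file order_by.txt, in turn, lists the packages the migration monitor should process, in the desired order, which is what -onlyProcessOrderBy true refers to.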

Now it is time for some manual action.

Importing the excluded tables

As the import is done by the <sid>adm user, we have to switch to that user (trgadm in this example) to execute the following commands.

  • > su - trgadm
  • > mkdir import_RowID_Tables
    > cd import_RowID_Tables
    This will be the installation directory for the manual import.
  • The following files and directories need to be copied into the new directory.
    > cp -r ../importSWPM/sapjvm .
    > cp ../importSWPM/ .
    > cp ../importSWPM/  .
    > cp ../importSWPM/DDLHDB.TPL .
    > cp ../importSWPM/ngdbc.jar .

After copying the files, we have to adapt the properties file of the migration monitor to the new directory. I use ‘.’ instead of the absolute path.


  • Now we have to extract the migration tools into the directory to be able to import the tables by hand.
    > SAPCAR -xvf <IM_LINUX_X86_64>/COMMON/INSTALL/MIGMON.SAR migmon.jar migmonctrl.jar
  • Now we can already start to import the excluded tables. As we have chosen to create the indexes later, we have to omit them together with the views during this run (-omit PIV). We will also use the same package filter, but the filter usage is now ‘exclusive’.

    > setenv JAVA_HOME ./sapjvm/sapjvm_6
    > ./ -sapinst -host ls9291 -instanceNumber 00 -username SAPTRG -password <Password> -databaseName OQ1
    -omit PIV -packageFilter /sapdb/exportQO1/RowID_Tables.txt -filterUsage exclusive -onlyProcessOrderBy true


Now, what do we have to do if one of these jobs runs into an error? Keep in mind that the WHERE clauses cannot be used on the import side. That means the whole table has to be truncated. In addition, the R3load task files and the import state of the migration monitor have to be adjusted properly.


In this example, two jobs for table REPOSRC have failed. That means we have to load the whole table again. This also means that all R3load jobs for this table need to be in status finished or failed before we can truncate the table and reset the status of all import jobs.


In this example the different jobs for table REPOSRC (REPOSRC-1, REPOSRC-10 and REPOSRC-post) have all kinds of statuses:

  • Status 0 means the job has not yet been processed.
  • Status ? means the job is being processed right now.
  • Status - means an error occurred.

The first thing now is to prevent all jobs with status 0 from being executed by the migration monitor:

sed -i "/REPOSRC/s/=0/=+/g"

Now all unprocessed jobs for table REPOSRC have status ‘+’, which means they won’t be processed by the migration monitor.
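The effect of this step can be sketched on a toy copy of the state file (the file name import_state.properties and the <package>=<status> entry format are assumptions about the migration monitor's state file; verify them in your installation directory):

```shell
# Toy state file with a few REPOSRC jobs and one job of another table.
cat > import_state.properties <<'EOF'
REPOSRC-1=0
REPOSRC-2=?
REPOSRC-10=-
REPOSRC-post=0
T100-1=0
EOF

# Block all not-yet-processed REPOSRC jobs (status 0 -> +) so the
# migration monitor skips them while we clean up the table.
sed -i "/REPOSRC/s/=0/=+/g" import_state.properties
```

Jobs that are currently running (?) or failed (-) keep their status, and other tables (here T100) are not touched.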

The next step is to kill all running R3load jobs for this table. Don't use kill -9, so that the process has a chance to end itself properly. After that, all jobs for this table should have the status - (failed) or + (ok).

The next step is to truncate the table and to delete all TSK files which belong to this table.

sidadm> hdbsql -U DEFAULT "truncate table REPOSRC"

sidadm> rm REPOSRC-*__TPIVMU.TSK*
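The effect of the TSK cleanup can be sketched with dummy files (the file names, including the TPIVMU part derived from the omit options, are illustrative; the real files are created by R3load in the installation directory):

```shell
# Dummy task files as they might appear in the installation directory.
touch REPOSRC-1__TPIVMU.TSK REPOSRC-10__TPIVMU.TSK.bck T100-1__TPIVMU.TSK

# Remove every task file belonging to REPOSRC, including .bck backups,
# while leaving other tables' task files untouched.
rm REPOSRC-*__TPIVMU.TSK*
```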

Now we have to reschedule the jobs:

sed -i "/REPOSRC-post/s/=+/=0/g"

sed -i "/REPOSRC-[^p]/s/=[-,+]/=0/g"
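The rescheduling can likewise be sketched on a toy state file (file name and entry format are assumptions, as above):

```shell
# Toy state file after the cleanup: all REPOSRC jobs are blocked (+)
# or failed (-).
cat > import_state.properties <<'EOF'
REPOSRC-1=+
REPOSRC-2=-
REPOSRC-post=+
T100-1=+
EOF

# Reschedule the -post job first (+ -> 0) ...
sed -i "/REPOSRC-post/s/=+/=0/g" import_state.properties
# ... then all data packages (- or + -> 0); the [^p] keeps REPOSRC-post
# from matching again, and other tables stay untouched.
sed -i "/REPOSRC-[^p]/s/=[-,+]/=0/g" import_state.properties
```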


In case you don’t want to do this while the migration monitor is running because it is still working on other tables, you might want to create one installation directory for each table you want to import this way. By doing this, each table has its own migration monitor, which will abort once all jobs for this table have failed.

After the jobs are rescheduled, they will hopefully finish successfully. Then we can switch back to the software provisioning manager.


From the pop-up we see that all the other jobs have already finished. Once we have managed to import our excluded tables, we can press OK. Now the indexes for the tables REPOSRC and T100 and finally the views will be created.


After the import, the package checker is executed. It compares the packages on the export DVD with the log files in the directory of the software provisioning manager to make sure everything was imported. Since we have imported some tables in different directories, we will get a message saying that some packages might not have been loaded.


But we know that everything is fine, so we press the OK button. Now the installation should cruise to a successful end.

      Former Member

      Stefan, really helpful article, thanks. Ken

      Former Member

      Hi Stefan

      I have one question about the Oracle PL/SQL splitter tool. I don't think we can use the Oracle ROWID-based split via the PL/SQL script (as per OSS note 1043380 - Efficient Table Splitting for Oracle Databases) for a HANA migration. I have used the PL/SQL-based script, but it did not work for HANA, because a ROWID in Oracle is different from HANA. So finally I used the R3ta splitter. Please let me know whether you used R3ta-based splitting or the native Oracle PL/SQL-based splitting?



      Warren Chirhart

      Hello Dipam,

      I recently used the Oracle Splitter for our migration away from Oracle to HANA (SPS10). I did this simply because we had too many tables to split and R3ta was taking too long to generate the splits. I had no problems with it. We had a consistent and well-running instance after the migration. It's true that HANA has no idea about Oracle ROWIDs, but there are other options, and this should not cause any problems during the import. What problems/errors did you experience?

      As the note mentions, if one or more of the table splits has issues, the tool cannot remove just that set of rows from the table before trying to reimport the split package(s). However, it's my experience that this is highly inefficient, no matter what DB you are using. I have wasted hours of my life waiting for the rows from a split to be removed from a database only to be retried. I was never patient enough to wait for it/them to finish. It was faster to drop the table and reimport all of the splits. On HANA this is especially true.

      I should mention that I did not use ROWID for the splits. I used the "FOO" option. Instead of specifying ROWID, I specified FOO and let Oracle choose the correct key field. I was not able to split all tables this way. Partitioned tables worked well with FOO, transparent tables errored out. For the transparent tables, I used the key field that was identified when I was trying to use R3ta during my test runs.

      Kind regards,

      Warren Chirhart

      Former Member

      Hello Warren,

      What do you mean by the 'FOO' option? Apologies, I couldn't understand it.

      I agree that ROWID is supposed to be used where splitting is a matter of a few seconds or minutes, not hours, when time is very critical.


      Amit Lal