To improve export performance and to shorten the table splitting itself, Oracle offers a fast splitting mechanism. The so-called PL/SQL-Splitter uses the ROWIDs of a table to calculate the WHERE-clauses used by the exporting R3load.
As a ROWID represents the address of a record in a table, there is no guarantee that it will be the same in the source and the target database.
Therefore the ROWID can only be used to export the data with multiple R3load processes in parallel. However, R3load also uses the WHERE-clause on the import side when it is restarted after an error: it adds the WHERE-clause to the delete statement so that it only deletes its own subset of the data for this table.
That means if a table is imported into SAP HANA in parallel and the WHERE-clauses are based on ROWIDs, the whole table has to be truncated and loaded again if a single R3load fails. Given the performance improvement on the export side, this can still be worth the effort.
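For illustration, a ROWID-based split condition in a WHR file could look roughly like the following. The ROWID values and the exact syntax here are made up for illustration; the real clauses are generated by the splitter described in the note below.

```
tab: REPOSRC
WHERE ROWID BETWEEN CHARTOROWID('AAAUWTAAEAAAACBAAA') AND CHARTOROWID('AAAUWTAAEAAAAiBH//')
```

Such a condition is only meaningful on the source database, which is exactly why it cannot be reused for the delete on the target side.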
This blog post describes how to use this splitting mechanism together with the software provisioning manager. The version should be at least SWPM 1.0 SP7 PL4. The scenario should be tested thoroughly before it is used for a productive migration, including the error case, so that you are familiar with the recovery procedure.
Creating the structure of the export DVD
First things first: an empty export medium or export DVD needs to be created. As the source database is Oracle, you have to choose Oracle in the Product Catalog.
The target database type will of course be SAP HANA. The rest of the routine is straightforward.
This example assumes that not all tables to be split will be split with the PL/SQL-Splitter. That means we have to call the table splitting procedure of the software provisioning manager twice. In this case it is important that the PL/SQL-Splitter is called first.
Splitting by ROWID
More information about the PL/SQL-Splitter can be found in OSS note 1043380 (Efficient Table Splitting for Oracle Databases).
To get to the PL/SQL-Splitter, you have to choose Oracle as the target database type.
After the procedure has finished successfully, you will find the result on the export DVD. If there are other tables which shall be split with the standard tools, you have to call the software provisioning manager again. The new WHERE files will be added to the existing ones.
Splitting with standard tools
This time you have to choose SAP HANA Database as the target database type. Otherwise the existing WHERE files will be deleted.
After all splitting is done, you should have WHERE files for all your tables, and the file whr.txt must contain the names of all split tables.
Export of the source database
The export of the source database is the same as for all migrations towards SAP HANA. You can use all possible options.
Installation of the target system
On the target side we have to do a few manual steps.
The new option ‘Omit tables from import’ makes it possible to exclude tables and packages from the import. We use this feature to exclude the tables which were split by the PL/SQL-Splitter. We will import them manually.
In this example I have marked the checkbox “Create indexes afterwards”. This is not necessary. It was just the default.
Once the import is running we can simply grep the commandline from the running processes. We can see that the migration monitor is called with a few additional options:
- -omit V
The views will not be created in this run. They will be created once all tables are imported; otherwise we might run into errors while the excluded tables are not yet loaded.
- -packageFilter <file>
This file contains the tables or packages which are excluded.
- -filterUsage omit
The tables and packages listed in the file are omitted.
- -onlyProcessOrderBy true
Only the tables and packages listed in the file order_by.txt are processed by the migration monitor, not all packages on the export DVD. Therefore the excluded tables are not processed in this run.
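As an illustration, the filter file passed to the migration monitor is a plain text file with one table or package name per line. The file name RowID_Tables.txt matches the one used later in this walkthrough; REPOSRC and T100 are the split tables of this example.

```shell
# Create the filter file referenced via -packageFilter
# (table names are the ones used in this example)
cd "$(mktemp -d)"
cat > RowID_Tables.txt <<'EOF'
REPOSRC
T100
EOF
cat RowID_Tables.txt
```

With -filterUsage omit these entries are skipped; the manual import later reuses the same file with -filterUsage exclusive.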
Now it is time for some manual action.
Importing the excluded tables
As the import is done by sidadm, we have to switch to that user to execute the following commands.
> su - trgadm
> mkdir import_RowID_Tables
> cd import_RowID_Tables
This will be the installation directory for the manual import. The following files and directories need to be copied into the new directory:
> cp -r ../importSWPM/sapjvm .
> cp ../importSWPM/import_monitor_cmd.properties .
> cp ../importSWPM/migmonctrl_cmd.properties .
> cp ../importSWPM/DDLHDB.TPL .
> cp ../importSWPM/ngdbc.jar .
After copying the files, we have to adapt the properties file of the migration monitor to the new directory. I use ‘.’ instead of the absolute path.
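The path adjustment can be sketched with sed on a mock properties file. installDir is a standard migration monitor property; the old absolute path in this demo is made up, so check your own import_monitor_cmd.properties for the entries that still point at the SWPM directory.

```shell
# Demo: point the installation directory at '.' instead of the absolute path
# (the original path value below is illustrative)
cd "$(mktemp -d)"
cat > import_monitor_cmd.properties <<'EOF'
installDir=/usr/sap/TRG/importSWPM
jobNum=10
EOF
sed -i 's#^installDir=.*#installDir=.#' import_monitor_cmd.properties
cat import_monitor_cmd.properties
```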
Now we have to extract the migration tools into the directory to be able to import the tables by hand.
> SAPCAR -xvf <IM_LINUX_X86_64>/COMMON/INSTALL/MIGMON.SAR dyn_control_import_monitor.sh migmon.jar migmonctrl.jar
Now we can start to import the excluded tables. As we have chosen to create the indexes later, we have to omit them together with the views during this run (-omit PIV). We will also use the same package filter, but this time with filter usage ‘exclusive’.
> setenv JAVA_HOME ./sapjvm/sapjvm_6
> setenv HDB_MASSIMPORT YES
> ./dyn_control_import_monitor.sh -sapinst -host ls9291 -instanceNumber 00 -username SAPTRG -password <Password> -databaseName OQ1
-omit PIV -packageFilter /sapdb/exportQO1/RowID_Tables.txt -filterUsage exclusive -onlyProcessOrderBy true
Now, what do we have to do if one of these jobs runs into an error? Keep in mind that the WHERE clauses cannot be used on the import side. That means the whole table has to be truncated, and the R3load task files and the import state of the migration monitor have to be adjusted accordingly.
In this example two jobs for table REPOSRC have failed. That means we have to load the whole table again. This also means that all R3load jobs for this table need to be in status finished or failed before we can truncate the table and reset the status of all import jobs.
The jobs for table REPOSRC currently show all kinds of statuses:
- REPOSRC-1, REPOSRC-10 and REPOSRC-post
Status 0 means they have not been processed yet.
- REPOSRC-2, REPOSRC-4, REPOSRC-6
Status ? means they are currently being processed.
- REPOSRC-3, REPOSRC-5, REPOSRC-8
Status – means an error occurred.
The first thing now is to prevent all jobs with status 0 from being executed by the migration monitor:
sed -i "/REPOSRC/s/=0/=+/g" import_state.properties
Now all unprocessed jobs for table REPOSRC have status ‘+’, which means they won’t be processed by the migration monitor.
The next step is to kill all running R3load jobs for this table. Don’t use kill -9, so that each process gets the chance to end itself properly. Afterwards all jobs for this table should have the status – (failed) or + (ok).
The next step is to truncate the table and to delete all TSK files which belong to this table.
sidadm> hdbsql -U DEFAULT "truncate table REPOSRC"
sidadm> rm REPOSRC-*__TPIVMU.TSK*
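The cleanup glob can be demonstrated on mock task files. The file names follow the pattern shown above; the point is that only the REPOSRC task files (including any backup copies) are removed, while other tables’ files stay untouched.

```shell
# Demo of the TSK cleanup on mock files (names follow the pattern above)
cd "$(mktemp -d)"
touch REPOSRC-1__TPIVMU.TSK REPOSRC-post__TPIVMU.TSK.bck T100-1__TPIVMU.TSK
rm REPOSRC-*__TPIVMU.TSK*
ls
```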
Now we have to reschedule the jobs:
sed -i "/REPOSRC-post/s/=+/=0/g" import_state.properties
sed -i "/REPOSRC-[^p]/s/=[-+]/=0/g" import_state.properties
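The whole state handling can be replayed on a mock import_state.properties. The package names and statuses below are illustrative; in the real run the ‘in process’ jobs end up as failed after the R3loads are killed, so the mock starts from that point.

```shell
# Runnable demo of parking and rescheduling REPOSRC jobs
cd "$(mktemp -d)"
cat > import_state.properties <<'EOF'
REPOSRC-1=0
REPOSRC-3=-
REPOSRC-post=0
T100-1=0
EOF
# 1. Park all unprocessed REPOSRC jobs (0 -> +) so the monitor skips them
sed -i "/REPOSRC/s/=0/=+/g" import_state.properties
# ... kill the R3loads, truncate the table, delete the TSK files ...
# 2. Reschedule the post job and then all data jobs back to 'not processed'
sed -i "/REPOSRC-post/s/=+/=0/g" import_state.properties
sed -i "/REPOSRC-[^p]/s/=[-+]/=0/g" import_state.properties
cat import_state.properties
```

Note the order: the data jobs are matched with REPOSRC-[^p] so that the already-reset post job is not touched again, and the T100 entry is never affected.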
If you don’t want to do this while the migration monitor is still working on other tables, you can create one installation directory for each table you import this way. Then each table has its own migration monitor, which will abort once all jobs for this table have failed.
After the jobs are rescheduled they hopefully will finish successfully. Then we can switch back to the software provisioning manager.
From the pop-up we can see that all the other jobs have already finished. Once we have imported our excluded tables, we can press OK. Now the indexes for the tables REPOSRC and T100 and finally the views will be created.
After the import, the package checker is executed. It compares the packages on the export DVD with the logfiles in the directory of the software provisioning manager to make sure everything has been imported. As we have imported some tables in different directories, we will get a message saying that some packages might not have been loaded.
But we know that everything is fine, so we press the OK button. Now the installation should run to a successful end.