To make it possible to ship SAP-HANA-specific improvements of software provisioning manager more often than with the quarterly shipment of the Software Logistics Toolset (SMP login required), SAP came up with a fast delivery strategy especially for this use case. The second version of this fast delivery method is software provisioning manager 1.0 SP3 PL6, which is described for migration experts in this blog.

This shipment contains many corrections that should make the migration to SAP HANA smoother than before. After you start software provisioning manager, the improvements are available via a new folder in the product catalog:

The folder is available for every database type.

Exporting the Source System

The export preparation routine now only creates the export medium structure. As no DBSIZE.XML is necessary for the migration to SAP HANA, neither R3szchk nor R3ldctl is called during this procedure.

The table splitting preparation has two new features:

  1. The input file tables.txt now also supports the syntax <TABLENAME>:<max number of rows> - for example, REPOSRC:1000 means that table REPOSRC will be split every 1,000 rows.
    This syntax is already supported by R3ta and is now also supported by software provisioning manager (see the example after this list).
  2. If, for example, 10 tables were split during the first export and you want to split only one additional table, or you want to split some of the tables differently, you only put the relevant tables into the input file.
    All existing WHR files on the export medium that are listed in whr.txt and are not part of the new input file are copied into the ./SPLIT directory; then, after all WHR files on the export medium
    have been deleted, they are moved back onto the export medium. So, you don't have to run the whole table splitting again just because of a change to one table.
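
For illustration, an input file using the new syntax could look like this; the table names and row counts are example values only:

  REPOSRC:1000
  GLPCA:2000000

Here, REPOSRC would be split every 1,000 rows and GLPCA every 2,000,000 rows.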

After this table splitting dialog, you get the following new dialog that handles the R3ta_hints.txt file:

  • The default option is the standard behavior that copies the file from <IM>/COMMON/INSTALL into the installation directory.
  • The second option allows you to provide a custom file with a special hint column for specific tables.
  • Using the third option, you can add more than one column to a table. This is meant for tables whose columns have very low selectivity. Non-primary-key columns can be used as well (see the sketch after this list).
    If more than one column is used, the resulting WHERE clause can get complicated, so you should check that the WHERE clause performs well on the source database.
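
To give an idea of what such a multi-column WHERE clause can look like, here is a purely illustrative sketch of a composite range condition over two columns - the column names and values are made up, and the actual clauses are generated by R3ta:

  WHERE ( "MANDT" < '100' )
     OR ( "MANDT" = '100' AND "BELNR" <= '0000500000' )

The more columns are involved, the more OR branches of this kind are needed, which is why the performance check on the source database is worthwhile.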


The database instance export routine contains most of the new features. It begins when you choose SAP HANA database as the target database type:

The next dialog is new:

As you can read on the dialog, there are three options to choose from:

  • New export from scratch.
    Choose this option when you perform the first export or when you want to start from scratch.
    All *.EXT files from R3szchk and *.STR files from R3ldctl are copied to the directories EXT and STR on the export medium.
  • Repeat existing export.
    If an export is already on the export medium, software provisioning manager will run exactly the same export again, using the STR and WHR files as they are on the medium. TOC and data dump files will be deleted first.
    Choose this option to try different sort orders without running R3szchk, R3ldctl, and the splitter tools again.
  • New export, Reuse STR, EXT and WHR files.
    This option uses the EXT and STR files from the directories EXT and STR, so it is not necessary to run R3szchk and R3ldctl again.

If you chose New export from scratch or New export, Reuse STR, EXT and WHR files, you will come to this dialog:

It contains one new split option called Number of Tables Limit. This new split feature runs after the other package splitting is done and ensures that every STR file contains at most the given number of tables. It is mainly meant for SAP NetWeaver Business Warehouse migrations, where thousands of empty tables end up in one STR file: because all tables are empty, such a package has no size and cannot be split by size, yet its execution can still take 20 hours.

Of course, this option can also be used for non-SAP Business Warehouse systems.
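
Conceptually, the new limit works like the following chunking logic - a minimal sketch in Python, not the actual software provisioning manager code:

  # Sketch of the "Number of Tables Limit" idea: after the size-based
  # splitting, cut any package that still holds more than `limit` tables
  # into chunks of at most `limit` tables each.
  def split_by_table_limit(package_tables, limit):
      return [package_tables[i:i + limit]
              for i in range(0, len(package_tables), limit)]

  # Example: a BW-style package with 5000 empty tables and a limit of 1000
  # yields 5 packages that can be exported in parallel.
  tables = ["/BIC/TAB%04d" % i for i in range(5000)]  # hypothetical names
  print(len(split_by_table_limit(tables, 1000)))      # prints 5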

On the next dialog, if you choose Define Special Package Unload Order...

...you will get this new dialog:

Until now, it was only possible to sort the export by name (which put tables like _BI* at the end of the list) or to manually create an order_by file. Now, you have three options:

  • The old behavior that sorts by name,
  • Sort by size, which uses the EXT files,
  • Sort by time analyzer results.

If you run the export for the first time, you should choose sort by size.

If you run the export a second time and have made no changes to the splitting, you can sort the export based on the time analyzer results.

When you choose sort by time analyzer results, you come to this dialog:

So not only the results of the export can be used to create an order_by.txt file for the next run - the import analysis can be used as well. The reason is that a table or package that runs for a long time on the import side can take only a few minutes on the export side and might therefore be scheduled too late. So, the top packages from the import side are put at the top of group LARGE to get exported right from the start.

If you made changes to the splitting, you can still choose the sort by runtime option. In this case, software provisioning manager sorts the packages by runtime, and packages that are not in the file are sorted by size. Once the export has started, a file called order_by_visual.txt contains the size / runtime data in addition to the sort order.

These two options support the parallel export / import approach, which means that you start the import as soon as the export has started.

To provide input to the import side early, small packages need to start right at the beginning of the export. But as long-running packages stretch the export time, these packages also need to start right away. Therefore, we came up with a strategy to build an order_by.txt for the export as well, similar to what we already did for the import. It contains two groups called LARGE and SMALL:

  • Group LARGE is sorted from the biggest / slowest package to the smallest / fastest package.
  • Group SMALL is sorted from the smallest / fastest to the biggest / slowest package.

So right from the start, packages finish very soon (enabling an early start of the import), but long-running packages are also started right away so that they do not stretch the export.
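
The following Python sketch illustrates how such an order could be derived from per-package size or runtime figures. It is purely illustrative: the package names and sizes are made up, and the cut in the middle is an assumption, not how software provisioning manager actually assigns packages to the two groups:

  # Made-up package sizes in GB, just to illustrate the ordering idea.
  packages = {"SAPAPPL1_1": 120.0, "SAPCLUST": 45.0, "SAPPOOL": 12.0,
              "SAPAPPL2": 3.5, "SAPSSEXC": 0.8, "SAPSDIC": 0.2}

  ranked = sorted(packages, key=packages.get, reverse=True)
  cut = len(ranked) // 2                            # assumption: split in the middle
  large = ranked[:cut]                              # biggest/slowest first
  small = sorted(ranked[cut:], key=packages.get)    # smallest/fastest first

  print("LARGE:", large)  # ['SAPAPPL1_1', 'SAPCLUST', 'SAPPOOL']
  print("SMALL:", small)  # ['SAPSDIC', 'SAPSSEXC', 'SAPAPPL2']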

The new strategy also moves jobs around to keep the number of jobs you entered running.

The number of jobs on the export side is steady: it is not increased or decreased depending on the system load, as is the case on the import side.

This replaces the standalone version of createorderby.jar.

Import into SAP HANA

Database Refresh or Move

You can use this procedure to re-install a completely installed system. The system must be accessible during the procedure, so to be able to use this option, the system must at least be able to start.

Standard System

Migmonctrl received a few new features to keep the number of jobs up at the end, and it now also fully supports declustering.

Also, this dialog is back:

Maximum number of parallel jobs is the maximum number of jobs that migmonctrl will schedule. It is mapped to the parameter maximumJobNum of migmonctrl_cmd.properties.

If the load on the database is high, the number of jobs might be lower than this maximum. Also, this number is not used right from the start: migmonctrl increases the number of jobs until this limit is reached.
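
For reference, the corresponding entry in migmonctrl_cmd.properties could look like this - the parameter name is the one mentioned above, while the value 20 is just an example:

  maximumJobNum = 20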

A tail -f migmonctrl.log during the import provides valuable information about how the tool is working.

In case you do an installation into an SAP HANA database that you used before, we recommend using the option Initialize database topology instead of drop user, as it is faster and doesn't trigger asynchronous garbage collection. But be aware that you lose all data with this option, so your schema should be the only one in this instance!