Tanuj Gupta

DSO Transport and Activation performance issue – adding new fields


• DSO/InfoCube with a very large number of entries (more than 10 million)
• Database is Oracle
• Customer is adding new fields to the DSO/InfoCube and transporting the changes to the Production system


Whenever changes such as
a) adding new key figures, or
b) adding new characteristics and dimensions

are performed on an object (DSO or InfoCube) that already contains data, the activation process has to adjust the table structure(s). This adjustment alters the table by adding the new column as "NOT NULL" and initializing it with its default value. This should work without any problems, provided there are enough database resources to perform the change.

Adding NOT NULL fields requires checking the entire existing data set for null values, which consumes a large amount of space in the transaction log and in the database and involves many I/O operations. Depending on the I/O throughput of your storage, this can take hours.
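On Oracle, the structure adjustment described above boils down to a statement of roughly the following shape (the table and column names here are hypothetical placeholders, not taken from a real system):

```sql
-- Hypothetical example: adding a new key figure column to a DSO active table.
-- Because the column is added as NOT NULL with a default, the database may
-- have to touch every existing row, which is what makes this so expensive
-- on tables with tens of millions of entries.
ALTER TABLE "/BIC/AMYODS00"
  ADD ("/BIC/ZNEWKYF" NUMBER(17,2) DEFAULT 0 NOT NULL);
```

Note that Oracle 11g and later can often apply a DEFAULT ... NOT NULL column addition as a metadata-only change, which is one reason the severity of this problem depends on the database release.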

The reason for initializing the newly added column is that, from the BW reporting and extraction point of view, the column has to be NOT NULL and initialized with a default value, because BW cannot handle null values. The general recommendation, when you need to remodel an object by adding new fields and the object already contains a lot of data, is to follow the workaround below.

The general workaround involves making a copy of the existing data in the cube using the datamart interface or Open Hub services. For example:
1. Make a copy of the existing cube (Cube1) as Cube2.
2. Add the new fields to Cube2.
3. Load the data from Cube1 to Cube2 using the datamart interface (and use Cube2 for all further purposes).

If you have reporting queries defined on Cube1, then you need to do the following:

1. Make a copy of the existing cube (Cube1) as Cube2.
2. Load the data from Cube1 to Cube2 using the datamart interface.
3. Delete the data from Cube1.
4. Add the new fields to Cube1.
5. Reload the data from Cube2 to Cube1 using the datamart interface.

Alternative Solution:

An alternative to this solution is SAP Note 1287382, which we have observed to be much faster than the standard process.

1287382 Extending BW objects on Oracle with large tables

We have observed a significant performance improvement using this SAP Note, although implementing it requires good application knowledge.

Here are the steps for your reference:

1. Make a list of the large DSOs (identify the size of the active table and change log table) and InfoCubes (identify the size of the E and F tables).
2. Archive or clear the change log tables, compress the InfoCubes, and minimize the number of records in the tables.
3. Make a list of the objects to be transported, identify the dependencies between the objects, create the change requests (try to group large DSOs and InfoCubes into separate change requests), and determine the transport sequence.
4. Check the current SP level and implement SAP Note 1287382 together with the correction mentioned in SAP Note 1340922.
5. Run program SAP_RSADMIN_MAINTAIN and maintain the RSADMIN parameters as described in SAP Note 1287382.
6. Schedule a background job to run program SAP_NOT_NULL_REPAIR_ORACLE with suitable parameters (table name, degree of parallelism). More parallel jobs can be scheduled depending on your hardware capacity. For InfoCubes, the job has to be run separately for each DIM table and fact table.

The program essentially does the following:
– Renames the table (e.g. /BIC/AMYODS00) to /BIC/AMYODS00_OLD
– Creates a new table /BIC/AMYODS00 with the new fields already included
– Copies the data from /BIC/AMYODS00_OLD into the newly created table /BIC/AMYODS00
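A rough SQL sketch of this rename-and-copy technique follows (heavily simplified; the real program additionally rebuilds indexes and statistics and keeps the ABAP dictionary in sync, and the new field name here is a hypothetical placeholder):

```sql
-- Step 1: move the existing table out of the way, keeping all its data.
ALTER TABLE "/BIC/AMYODS00" RENAME TO "/BIC/AMYODS00_OLD";

-- Step 2: create the new table with the additional field included from the
-- start, copying the data in one pass. Because the column is populated with
-- its default during the copy, no expensive ALTER ... ADD NOT NULL has to be
-- run against the existing rows. (Constraint and index handling is omitted
-- in this sketch.)
CREATE TABLE "/BIC/AMYODS00" AS
  SELECT t.*, 0 AS "/BIC/ZNEWKYF"
  FROM "/BIC/AMYODS00_OLD" t;
```

The copy is the step that benefits from the parallelism parameter mentioned above, since a bulk CREATE/INSERT ... SELECT parallelizes far better than an in-place ALTER of a huge table.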

7. Ensure that the job has executed successfully in SM37.
8. If any errors are observed in SM37:

The program fails before copying: Check whether "/BIC/AMYODS00_OLD" exists. Normally this should not be the case, because if errors occur, "/BIC/AMYODS00_OLD" is renamed back to "/BIC/AMYODS00". If the table does exist, then before the program is restarted, the table "/BIC/AMYODS00" must be dropped (it MUST NOT contain data!) and "/BIC/AMYODS00_OLD" renamed back to "/BIC/AMYODS00":
1.) Drop the new, empty table "/BIC/AMYODS00" (using SE14 or SQL: DROP TABLE "/BIC/AMYODS00").
2.) Rename "/BIC/AMYODS00_OLD" to "/BIC/AMYODS00" (must be done with native SQL).
3.) In DD03L, update at least one column to nullable.
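The native-SQL part of this recovery could look like the following (same hypothetical table name as above; run it only after verifying that the new table really is empty):

```sql
-- Step 1: drop the new, EMPTY table left behind by the failed run.
-- Verify first, e.g. SELECT COUNT(*) FROM "/BIC/AMYODS00"; must return 0.
DROP TABLE "/BIC/AMYODS00";

-- Step 2: rename the backup, which still holds all the data, back into place.
ALTER TABLE "/BIC/AMYODS00_OLD" RENAME TO "/BIC/AMYODS00";
```

The third step (adjusting DD03L) is an ABAP dictionary correction, not native SQL, and is therefore not part of this sketch.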

The program fails after copying: This is the case when the number of records in "/BIC/AMYODS00_OLD" and "/BIC/AMYODS00" differ.

– Analyze and repeat in accordance with the previous step ("The program fails before copying"); /BIC/AMYODS00_OLD should still contain the complete data.
– The program should not simply be restarted. Instead, after the error analysis, the remaining steps (building the index and building statistics) are carried out manually.

9. Drop the *_OLD tables once the conversion is successful.

If the DSO/InfoProvider contains more than 10 million entries, the alternative solution above should reduce the required time significantly.

      Renjith Kumar Palaniswamy
      Very useful blog for managing performance related issues.
      Former Member
      Here's another alternative -
      We add the fields in the Dev environment, which has very little data (at least in our system); this modifies all the underlying table structures within a reasonable amount of time. Then we give the DBAs the list of tables that were modified (the active table for a DSO, and the fact tables for cubes) and ask them to modify these tables in the QA system in the same manner as in Dev, directly on the database, BEFORE the transports are moved to the QA system.

      The tools available to DBAs are much more sophisticated and faster than the way BW performs the same operation, so it goes much, much faster.

      Now when you move your transports, BW tries to modify the table but finds that the underlying tables already have the fields to be added, so it does nothing and finishes successfully.

      We used this method to add 48 fields to a DSO with 300,000,000 records and to a cube with 150,000,000 records, and it took us less than 12 hours to do all of this. None of the SAP standard workarounds worked for us with this volume of data.

      We have done this a few times now and it has worked without any issues so far.

      Obviously, nothing is supported by SAP if anything goes wrong, so try it at your own risk...