How to transport changes to very large ODSes in SAP BW
In almost all BW systems that have been live for three to four years, the following is a standard sight:
1. ODSes like Delivery Items, Material Movements etc. holding millions of entries (this of course depends on how heavily these functions are used in SAP)
Very often a change request comes along that requires an enhancement to these ODSes, and given the time such a change would take, a lot of sister ODSes spring up all around these mega ODSes. This leads to a proliferation of InfoSets, cubes and MultiProviders, which in turn forces some very difficult data modeling choices.
The question that needs to be answered first: what does it take to change these ODSes?
When you add data fields to an ODS that already contains data, the transport tries to adjust the table with the new fields. This causes the database to copy the existing data of the table into the undo tablespace before adjusting it, so that it can roll back if anything goes wrong. Some of these terms are Oracle-specific, but the concept is the same on other databases.
What this means is that when you move a transport that changes a table with 20 million entries, the table is effectively backed up first and then adjusted. So make sure you have enough space in your undo tablespace (free space greater than or equal to the size of the active table) before attempting this.
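To get a feel for the numbers before you start, you can compare the two sizes directly in the database. This is only a sketch: the active table name (/BIC/AZSALES00) and the undo tablespace name (PSAPUNDO) are assumptions, so substitute your own. Note also that expired undo extents can usually be reused on top of what DBA_FREE_SPACE reports, so this check is conservative.

-- Size of the ODS active table (name assumed)
SELECT ROUND(SUM(bytes)/1024/1024) AS table_mb
  FROM dba_segments
 WHERE segment_name = '/BIC/AZSALES00';

-- Free space in the undo tablespace (name assumed)
SELECT ROUND(SUM(bytes)/1024/1024) AS free_mb
  FROM dba_free_space
 WHERE tablespace_name = 'PSAPUNDO';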
Another option for doing this:
Before moving the transport, back up the table manually in the database. This means copying the data from the active table of the ODS into a separate table, for example:
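A minimal sketch of the copy, assuming an ODS whose active table is /BIC/AZSALES00 (both table names here are illustrative):

-- One-off backup copy of the active table;
-- NOLOGGING and PARALLEL keep the copy fast on a large table
CREATE TABLE ZSALES_BKP
  NOLOGGING PARALLEL 8
  AS SELECT * FROM "/BIC/AZSALES00";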
Drop the data in the active table, change log and new data table of the ODS, AFTER making sure that all upward data flows are complete.
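At the database level the deletion could look like this. The table names follow the usual BW patterns but are assumptions here; the change log in particular carries a generated /BIC/B* name that you have to look up for your ODS.

TRUNCATE TABLE "/BIC/AZSALES00";    -- active data table
TRUNCATE TABLE "/BIC/B0000123000";  -- change log table (name assumed)
TRUNCATE TABLE "/BIC/AZSALES40";    -- new data / activation queue table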
Move the transport in.
Fire INSERT statements from the backup table into the changed ODS table. Here you can control the degree of parallelism etc. to make the process faster.
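A hedged sketch of such an insert: the column list is invented for illustration, and the newly added field is simply filled with an initial value.

ALTER SESSION ENABLE PARALLEL DML;

-- Direct-path, parallel insert back into the adjusted active table
INSERT /*+ APPEND PARALLEL(8) */ INTO "/BIC/AZSALES00"
       (doc_number, doc_item, quantity, znewfield)
SELECT doc_number, doc_item, quantity, ' '   -- initial value for the new field
  FROM ZSALES_BKP;

COMMIT;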
If your ODS is partitioned, take this into account as well. Also, if you partitioned the ODS directly at the database level, those changes are overwritten when you transport the ODS, and you will have to put them back afterwards.
Once the insert finishes, recreate / rebuild the indices.
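For example, for each secondary index of the ODS (the index name is illustrative):

ALTER INDEX "/BIC/AZSALES00~010" REBUILD NOLOGGING PARALLEL 8;
-- Reset the attributes once the rebuild is through
ALTER INDEX "/BIC/AZSALES00~010" NOPARALLEL LOGGING;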
If possible, do a compare between the two tables to make sure that the data matches, and only then drop the backup table.
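The comparison can be as simple as a row count plus a MINUS in both directions over the original columns (again, all names are illustrative):

SELECT (SELECT COUNT(*) FROM "/BIC/AZSALES00") AS ods_rows,
       (SELECT COUNT(*) FROM ZSALES_BKP)       AS bkp_rows
  FROM dual;

-- Both directions of this MINUS should return no rows
SELECT doc_number, doc_item, quantity FROM "/BIC/AZSALES00"
MINUS
SELECT doc_number, doc_item, quantity FROM ZSALES_BKP;

-- Only after the data matches:
DROP TABLE ZSALES_BKP PURGE;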
You are ready to go. This process might take about the same time as simply moving the transport, but it gives you greater control.
We tried this for an ODS with about 45 million entries and it worked fine.
Of course, you could argue that the change could be done directly in the database rather than going through transports, but then transports are SAP's way of keeping things together.
And of course you would need a brave DBA…
1. Transports: for the backup DSO you need a transport; for the database copy you do not need one.
2. You work directly at the DB level instead of going through the application level.
The only downside of my approach is that you end up wiping out the monitor entries of the ODS, and you cannot get them back.
Also, you will have the trouble of deleting the deltas: if the deletion does not allow you to delete the data, you will have to use selective deletion before you can move the transport.
Create a table in the BI system and code an ABAP program to load this table from the DSO.
Once your change moves in, you can easily load the data back either through ABAP or through the normal way of a transformation into this DSO, though the latter would be a performance dud if we are talking about multi-million records.
What say?
Advantages:
SID generation, monitor entries etc. all exist
Disadvantages:
An additional application layer is introduced
Additional development, transports and testing required 🙁
Instead of doing a custom table, you could do another DSO and load into that...
Very interesting... I did not know about that.
Is there any note you can reference on this?
I did come across a program in BI7 which makes the transport of ODSes faster when you are transporting changes to huge tables, but I cannot recall it right now.
http://www.oracle.com/technetwork/database/focus-areas/performance/sapstorageoptimization-335322.pdf
Look at page 10 of the PDF, under the section "Dictionary Only Add Column".
The long runtime when transporting large DSOs is actually due to Oracle populating the newly added field with a DEFAULT value in every existing row.
With Oracle 11g, this insert of the default value is bypassed: adding the column becomes a dictionary-only change.
Transporting DSOs with millions of records (20 mil, 50 mil, 80 mil) can now be done in a matter of minutes.
No more fiddling around with hacks, SAP programs, or pre-requisite steps for large DSO transports... 🙂
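In other words, on 11g the ALTER that the transport issues behind the scenes is a metadata-only operation. A sketch of what that statement looks like, with table and field names that are purely illustrative:

-- Dictionary-only on Oracle 11g: no rows are rewritten, so this
-- completes in seconds even on tables with tens of millions of entries
ALTER TABLE "/BIC/AZSALES00"
  ADD (ZNEWFIELD VARCHAR2(10) DEFAULT ' ' NOT NULL);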
Hi guys, I'm running my BW on Oracle version 11.2.0.3.0. Is this an 11g database?
thanks,
Yes it is. 11.2.x is Oracle 11g Release 2.