
After migrating my career from ORACLE to SAP, I faced a situation that wasn’t that new for me: adding a new NOT NULL field to a large table located in an ORACLE 10g database.

 

 

ORACLE

 

One of my first experiences with ORACLE modeling was in a large institution with many schools spread around the state. As expected, in the database we had a table called STUDENTS with over a million records.

 

One day, I needed to add a new field that had a default value and wouldn’t accept NULL. Since so many users accessed the table all the time, I couldn’t make this change during normal office hours: the table was constantly locked by insertions, reads and updates. So I had to save the script and send it to the Database Administrator, who would run it during the night.

 

I sent the DBA a simple ALTER TABLE statement, as follows:

 

 

ALTER TABLE STUDENTS
            ADD NEW_FIELD NUMBER(1) DEFAULT 0 NOT NULL;

 

 

As you can probably guess, the DBA got really mad at me, replied to my e-mail with some explanations about this statement and requested some changes to the script. And that’s how I learned how expensive this change was!

 

There were two main problems with this statement: the “NOT NULL” and the “DEFAULT 0”:

 

  • 1- The NOT NULL constraint. When this constraint is set on a field of a table, the ORACLE database checks every single record in it to make sure the constraint can really be applied.

 

To take a simpler example, let’s say the table STUDENTS already had the field DATE_BIRTH and it was NULLABLE, i.e. the NOT NULL constraint wasn’t set on it. If one day I decided to alter the table to make this field NOT NULL, then when I ran the statement:

 

 

ALTER TABLE STUDENTS
          MODIFY DATE_BIRTH NOT NULL;

 

 

… ORACLE would check whether any student had the field DATE_BIRTH set to NULL in this table. If at least one record did, the change wouldn’t be possible.

The more records the table has, the more checks the system has to perform and the longer and more expensive this operation becomes. Believe me, with 1-2 million records it was already expensive in the situation I was in!

 

Adding a new NOT NULL field is no different: the database will still check all the records currently in the table.

 

The problem is, if you try to run the statement:

 

 

ALTER TABLE STUDENTS
            ADD NEW_FIELD NUMBER(1) NOT NULL;

 

 

… i.e. with NOT NULL but without a default value, and the table already has records, the new field would be NULL for every existing record, which is inconsistent with the NOT NULL constraint.

 

If the table doesn’t have any records, the statement will run successfully, as there will be no records to check.

 

 

  • 2- The assignment DEFAULT 0. Whenever you assign a default value to a new field, the DB runs an UPDATE statement for it behind the scenes, something like:

 

UPDATE STUDENTS
      SET NEW_FIELD = 0
 WHERE NEW_FIELD IS NULL;

 

 

When you add a new field with a default value to a table, this UPDATE statement affects all of its records. Depending on how big your table is, imagine the amount of space that will be required in the rollback area!
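Just to make that concrete, here is one way to see how much undo (rollback) an active transaction is holding while such an UPDATE runs; it is only an illustration, assuming you can read the V$ views:

-- Undo blocks and undo records held by each active transaction
SELECT s.sid, s.username, t.used_ublk, t.used_urec
  FROM v$transaction t
  JOIN v$session     s ON s.taddr = t.addr;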

 

So, in that simple situation, I could put together a workaround that worked as follows (a sketch of it comes right after the list):

 

  • 1- Create the new field in the table as NULLABLE and without any default value.
  • 2- Run an UPDATE in blocks of 10,000 records, each block followed by a COMMIT, setting NEW_FIELD = 0. The COMMITs free up the rollback area between blocks.
  • 3- Assign DEFAULT 0 to the field.
  • 4- Set the field as NOT NULL, whenever appropriate. (No, this step couldn’t be avoided in this workaround.)
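Here is a minimal sketch of those four steps, assuming the STUDENTS table and NEW_FIELD from above and blocks of 10,000 rows; it is an illustration, not the exact script we ran:

-- 1) Nullable field, no default: a cheap, dictionary-only change
ALTER TABLE STUDENTS ADD NEW_FIELD NUMBER(1);

-- 2) Fill it in blocks of 10,000 rows, committing between blocks
BEGIN
  LOOP
    UPDATE STUDENTS
       SET NEW_FIELD = 0
     WHERE NEW_FIELD IS NULL
       AND ROWNUM <= 10000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;   -- releases the rollback/undo space used by the block
  END LOOP;
  COMMIT;
END;
/

-- 3) Default value for future inserts
ALTER TABLE STUDENTS MODIFY NEW_FIELD DEFAULT 0;

-- 4) The constraint itself: it still checks every row, but no mass UPDATE is needed anymore
ALTER TABLE STUDENTS MODIFY NEW_FIELD NOT NULL;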

 

 

The DBA actually wanted me to avoid the NOT NULL altogether and control this rule at the application level. However, I couldn’t agree with that, because some key users and also the developers had access to SQL*Plus and could generate inconsistent records. Table triggers could be used instead, but they are pretty easy to deactivate; as we can see, altering a table is much more complicated ;).
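Just to illustrate what that trigger-based alternative would look like, and why it is not a real substitute for the constraint (a sketch only, with an illustrative trigger name):

-- Enforce the rule without a constraint: fill in NULLs on the way in
CREATE OR REPLACE TRIGGER TRG_STUDENTS_NEW_FIELD
  BEFORE INSERT OR UPDATE OF NEW_FIELD ON STUDENTS
  FOR EACH ROW
BEGIN
  IF :NEW.NEW_FIELD IS NULL THEN
    :NEW.NEW_FIELD := 0;   -- or RAISE_APPLICATION_ERROR to reject the record
  END IF;
END;
/

-- ...and the one statement that makes this "protection" disappear
ALTER TRIGGER TRG_STUDENTS_NEW_FIELD DISABLE;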

 

Still, that first workaround wasn’t good enough. Since the NOT NULL constraint isn’t checked on an empty table, I composed a second option (sketched after the list):

 

  • 1- Create a new table based on the old table STUDENTS, plus the new field. The new table could be called STUDENTS_new, for example. Since this table is empty, the NOT NULL constraint isn’t checked and no default value has to be applied to existing records.
  • 2- Run a PL/SQL script with a loop that copies the records from STUDENTS to STUDENTS_new in blocks, with a COMMIT between blocks, again freeing up the rollback area.
  • 3- Rename the table STUDENTS to STUDENTS_old and the table STUDENTS_new to STUDENTS.
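For reference, here is a rough sketch of this “two tables trick” in Oracle SQL and PL/SQL. STUDENT_ID is a hypothetical primary key, and in a real migration the indexes, grants and other constraints of the original table would still have to be recreated on the new one:

-- 1) Empty copy of STUDENTS that already has the new NOT NULL field (no rows, so no check)
CREATE TABLE STUDENTS_NEW AS
  SELECT s.*, CAST(0 AS NUMBER(1)) AS NEW_FIELD
    FROM STUDENTS s
   WHERE 1 = 0;

ALTER TABLE STUDENTS_NEW MODIFY NEW_FIELD DEFAULT 0 NOT NULL;

-- 2) Copy the data over in blocks, committing between blocks
BEGIN
  LOOP
    INSERT INTO STUDENTS_NEW
      SELECT s.*, 0
        FROM STUDENTS s
       WHERE NOT EXISTS (SELECT 1
                           FROM STUDENTS_NEW n
                          WHERE n.STUDENT_ID = s.STUDENT_ID)
         AND ROWNUM <= 10000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/

-- 3) Swap the names (STUDENTS_OLD stays around as a backup)
ALTER TABLE STUDENTS RENAME TO STUDENTS_OLD;
ALTER TABLE STUDENTS_NEW RENAME TO STUDENTS;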

 

 

SAP Business Warehouse

Now that I work with SAP Business Warehouse, I have faced a much more complex situation: transporting changes to “bulky” BI DSOs or InfoCubes!

 

We all know BW objects can be HUGE, and applying even simple changes to them is not always easy. Thankfully, SAP BW was built, and keeps being adapted, to handle large amounts of data.

 

Something interesting about SAP BW and other SAP systems is that no field is created as NULLABLE. All Key Figures and Characteristics are created as NOT NULL DEFAULT ' ' or DEFAULT '0'. The reason is that calculations or JOINs involving a NULL value don’t work the way you would probably expect.

 

I’m not an expert in ABAP, but I know that in this language you can’t use NULL that freely: elementary variables won’t accept NULL and you don’t have a “sometimes-life-saver” NVL to count on. At the database level, if you expect an IF/JOIN condition COLUMN = NULL to be true when COLUMN is NULL, or a simple calculation like COLUMN = NULL + 1 to work, you will probably be disappointed. I’m not sure about other databases, but in ORACLE you surely would be!
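A quick illustration of that behavior on the ORACLE side, reusing the STUDENTS table from the earlier example:

SELECT COUNT(*) FROM STUDENTS WHERE DATE_BIRTH = NULL;    -- always 0: "= NULL" is never true
SELECT COUNT(*) FROM STUDENTS WHERE DATE_BIRTH IS NULL;   -- the correct way to test for NULL
SELECT 1 + NULL         FROM DUAL;                        -- returns NULL: arithmetic with NULL is NULL
SELECT 1 + NVL(NULL, 0) FROM DUAL;                        -- returns 1: NVL substitutes a value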

 

Well, we all know how complex an SAP system is. It can’t even be compared to an ORACLE Forms based application. There are so many layers between the database and the user that we can’t really apply some of the workarounds or tricks we could use in much simpler systems.

 

So, whenever a BW developer wants to add a field to an InfoProvider that has too many records and runs into long runtimes or even errors ORA-1628 or ORA-30036, SAP Support used to suggest the following workarounds:

 

 

If the InfoProvider isn’t used in queries and doesn’t have any dependent objects:

 

  • 1- Make a copy of the existing ODS (ODS1) to ODS2.
  • 2- Add the new fields to ODS2.
  • 3- Load the data from ODS1 to ODS2 using the DataMart (and use ODS2 for all further purposes).

 

If there are queries defined on ODS1, then the following solution is better:

 

  • 1- Make a copy of the existing ODS (ODS1) to ODS2.
  • 2- Load the data from ODS1 to ODS2 using the DataMart.
  • 3- Delete the data from ODS1.
  • 4- Add the new fields to ODS1.
  • 5- Reload the data from ODS2 back to ODS1 using the DataMart.

 

As you can see, these workarounds work pretty much like the one I described in the ORACLE modeling example, only performed at a much higher application level. The DataMart is built to handle the extraction of large numbers of records, so we don’t run into the same problem I had with the large UPDATE.

 

As you already know, yes, this runs faster than transporting the change… however, it’s far from simple. SAP developers worked on a “more automatic” workaround, which is introduced in SAP Note #1287382 – Extending BW objects on Oracle with large tables.

 

To make a long description short, it works like the second workaround from the ORACLE modeling example and the second DataMart workaround above. You simply register the tables you want in the table RSADMIN, and when the InfoProvider is transported, the new field is created as NULLABLE and without a default value. At a more appropriate time, you run the report SAP_NOT_NULL_REPAIR_ORACLE, and it automatically does the “two tables trick” for you.

 

You’ll probably be surprised by how fast this workaround is. I have seen situations in which the regular transport ran for 5 days until it failed, while with the workaround proposed in SAP Note #1287382 the transport finished after 20 minutes and the report took only an hour and a half.

 

If you decide to use this workaround, make sure to read the SAP Note carefully.

 

You may also check the following SAP Notes for improvements on the program SAP_NOT_NULL_REPAIR_ORACLE:

#1340922 – Follow-on to Note 1287382.

#1383233 – SAP_NOT_NULL_REPAIR_ORACLE generates invalid SQL statement.

#1425175 – SAP_NOT_NULL_REPAIR_ORACLE.

 

The good news is that you won’t face this issue anymore on ORACLE 11g, as the way a NOT NULL DEFAULT column is added has been reworked: the default is stored as metadata, so the existing rows no longer have to be updated.
