Bala Prabahar

Oracle’s Compress for OLTP – How to implement?


As you are probably aware, SAP has released several notes providing details on Oracle’s compression feature. Because compression has been available since Oracle 9i, some of SAP’s notes are old. In the past (until Oracle 10g), SAP provided a process for identifying good candidate tables for compression. 10g’s compression did not work for all statements and was never very popular because of those limitations. Oracle calls this Basic Compression, and it is still available in 11g. In addition, Oracle 11g introduced Advanced Compression. This feature requires an additional license if you purchased Oracle Enterprise Edition from Oracle Corporation; if you purchased it from SAP, no additional license is required.

There are probably hundreds of different ways to implement this feature. I will explain the one I followed. This procedure is still evolving, so I believe there is certainly room for improvement.

h5. Assumptions

1) Free disk space is limited.

2) You would like to test this feature in your environment to understand the benefits quickly.

3) Tables are partitioned. 

4) You are not familiar with the latest version of BRtools, which apparently supports reorganization with the Compress for OLTP feature.

5) You have created an Oracle DIRECTORY object to be used by the expdp and impdp utilities.
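If you still need the directory object, it can be created along these lines (the directory name, path, and schema owner below are placeholders for illustration; adjust them to your environment):

```sql
-- Hypothetical directory name and export path; <SID> is a placeholder.
CREATE OR REPLACE DIRECTORY expdp_dir AS '/oracle/<SID>/export';

-- Grant access to the schema owner that will run expdp/impdp
-- (SAPSR3 is assumed here as a typical SAP schema owner).
GRANT READ, WRITE ON DIRECTORY expdp_dir TO sapsr3;
```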

h5. Procedure

Step 1: Rebuild indexes with the COMPRESS option for tables with more than 100 million records (SAP’s recommendation per Note 1436352)

Since free disk space is limited in my environment, I initially focused on finding ways to free up space. The SAP note recommends compressing indexes for tables with more than 100 million records. Rebuilding indexes with the COMPRESS option not only releases disk space but also carries very low risk.

I rebuilt the indexes for two DSO tables in a BW sandbox and freed up 45GB. This also helped me understand the benefit of compression without spending a lot of time.
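As a sketch, the rebuild is a single statement per index; the index name below is made up in typical BW style, so substitute your own and repeat for every index of the table:

```sql
-- Hypothetical BW index name; COMPRESS enables index key compression,
-- ONLINE keeps the index available during the rebuild.
ALTER INDEX "SAPSR3"."/BIC/AZODSO0100~0" REBUILD COMPRESS ONLINE;
```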


Step 2: Rebuild large tables with the Compress for OLTP option

Next, we identified a few large tables in the BW and R/3 systems. The following is the list of sub-tasks I performed to rebuild a table with the “Compress for OLTP” option. Before you perform this step, make sure you bring down SAP, leaving only the database open.


      a. CREATE TABLE … AS SELECT (NOTE: If you would like to skip expdp and impdp, you could use the partition clause in the CTAS statement. Use step (e) to generate the “CREATE TABLE” clause for the CTAS statement.)
      b. EXPDP (Data Pump utility)
      c. EXP (export just the schema, for the DDL statement that creates the table)
      d. IMP (import the schema using the INDEXFILE option)
      e. Edit the INDEXFILE created by the IMPORT utility:
            1. Remove REM, and change the table name to match the table created in step (a), CREATE TABLE … AS SELECT. Don’t forget to add “COMPRESS FOR OLTP”.
            2. Write all INDEX statements to another file so the indexes can be created after importing the data.
            3. Create one file containing just the CREATE TABLE statement and another containing all indexes. Add the COMPRESS option to all indexes. Also add the PARALLEL option with a degree of 10 (or whatever you are comfortable with).
            4. WARNING: Make sure the initial extent size for each partition is small; otherwise the new compressed table may take up more space than the uncompressed table.
      f. Prepare the database for IMPDP
      g. IMPDP (import the data)

      Note: Data Size (before compression) was calculated after reorganizing the table uncompressed. In other words, the same SQL statement – with small INITIAL and NEXT extent sizes – was used to create both the “before” and “after” compression tables.
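Putting steps (a) and (e) together, the edited DDL might look roughly like the following sketch. All table, index, column, and tablespace names here are made up for illustration, and the partition clauses a real SAP table would carry are omitted for brevity:

```sql
-- Step (a): create the new table compressed, loading the rows in parallel.
CREATE TABLE "SAPSR3"."ZTABLE_NEW"
  PCTFREE 10 INITRANS 1
  STORAGE (INITIAL 64K NEXT 1M)   -- keep extents small (see the warning above)
  TABLESPACE "PSAPSR3"
  COMPRESS FOR OLTP
  PARALLEL 10
AS SELECT * FROM "SAPSR3"."ZTABLE";

-- Step (e): recreate each index with compression after the data is loaded.
CREATE INDEX "SAPSR3"."ZTABLE_NEW~0" ON "SAPSR3"."ZTABLE_NEW" ("KEYCOL")
  TABLESPACE "PSAPSR3"
  COMPRESS
  PARALLEL 10;
```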


    | | Data Size in GB (Before compression) | Data Size in GB (After compression) | Index Size in GB (Before compression) | Index Size in GB (After compression) | Comments |
    | Table 1 (BW/DSO) | 44 | 6 | 67 | 55 | |
    | Table 2 (BW/Change log) | 54 | 8 | 8 | 8 | |
    | Table 3 (BW/DSO) | 42 | 8 | 66 | 33 | |
    | Table 4 (R/3) | 73 | 14 | 72 | 53 | |
    | Total | 213 | 36 | 213 | 149 | |

    Compress for OLTP provides truly impressive disk space savings.
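To put those savings in perspective, here is a quick calculation on the totals reported above (the figures are the GB sums of the four table rows):

```python
# GB totals from the table above: data and index sizes before/after compression.
data_before, data_after = 213, 36      # data:  44+54+42+73 -> 6+8+8+14
index_before, index_after = 213, 149   # index: 67+8+66+72  -> 55+8+33+53

data_ratio = data_before / data_after                 # compression ratio for data
index_saving = 1 - index_after / index_before         # fraction of index space saved
total_saving = 1 - (data_after + index_after) / (data_before + index_before)

print(f"data compression ratio: {data_ratio:.1f}x")   # -> 5.9x
print(f"index space saved: {index_saving:.0%}")       # -> 30%
print(f"total space saved: {total_saving:.0%}")       # -> 57%
```

Roughly a 6x reduction on the data and a third of the index space reclaimed, for over half the combined footprint.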



        Stephen Sun
        Good, but this article used an export-recreate-import approach.

        Actually, even without compression, this alone would already eliminate a lot of embedded fragmentation.

        Bala Prabahar
        Blog Post Author
        Good point. I used an "export-import(reorg)-export-recreate_compress-import" approach, so embedded fragmentation was already eliminated by the first "exp/imp" step.

        At the very bottom (just above the table providing the stats), I included that information:
        "Note: Data Size (before compression) was calculated after reorganizing the table uncompressed. In other words, the same SQL statement - with small INITIAL and NEXT extent sizes - was used to create both the 'before' and 'after' compression tables."