
As a customer and technical lead, I was worried about the rapid storage growth that came with our organic growth. Every time there was a need to add new storage, my first thought was "how do we control and stop this as a long-term approach?" SAP has its own design for storing data at the database level, no matter which database type the customer uses.

Nowadays database vendors offer many options to keep the database in compressed form and save money on storage. In this rapidly growing technological world people say storage is cheaper than anything, and yes, it is. But it has its limitations as well: more storage always needs more resources to run.

We pay a lot for our storage (this may be customer-specific, not true for everyone), possibly because of lower maintenance costs on other things (yes, we pay more for storage than other comparable ones).

SQL Server 2008 R2 – a new-generation technology with a mature data storage mechanism, it delivered an 86.5% compression ratio on each of our BW instances (a BIG saving for us).

Oracle 11gR2 – the Advanced Compression Option (ACO) delivered a 66.6% compression ratio (roughly 3x) on our SAP R/3 (4.7) instance (again, a BIG saving).

It is a mature data storage mechanism that saves data in an almost magical way: a local symbol table is created in each block, and duplicate values disappear into it. Really innovative thinking 🙂

This results in big savings on storage cost and a performance improvement because of the reduced database size.
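
For reference, here is a minimal sketch of how OLTP compression can be switched on for a single table. The table and index names (SAPSR3.ZEXAMPLE_TAB, "ZEXAMPLE_TAB~0") are hypothetical examples only; in an SAP landscape the conversion is normally driven with BRSPACE as described in SAP Note 1431296 listed below.

    -- set the OLTP compression attribute (Oracle 11gR2 syntax), so newly
    -- written blocks get compressed
    ALTER TABLE sapsr3.zexample_tab COMPRESS FOR OLTP;

    -- rebuild the existing blocks so the current data is compressed as well
    ALTER TABLE sapsr3.zexample_tab MOVE COMPRESS FOR OLTP;

    -- the MOVE leaves the indexes UNUSABLE, so rebuild them
    -- (index key compression is a separate option, see SAP Note 1109743)
    ALTER INDEX sapsr3."ZEXAMPLE_TAB~0" REBUILD;

    -- verify the compression attributes
    SELECT table_name, compression, compress_for
      FROM dba_tables
     WHERE owner = 'SAPSR3'
       AND table_name = 'ZEXAMPLE_TAB';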

Data store mechanism:

[screenshot: data store mechanism]

82% compression on the top 10 tables:

[screenshot: compression results for the top 10 tables]

The ACO implementation saved more than 7 TB of storage in our landscape (P, Q and DR systems); it reduced the total database size from 2.7 TB to 0.9 TB, and overall performance improved by 10%.

Nowadays most Oracle customers are investing in ACO for an immediate ROI (ACO needs a separate license if your database license is from Oracle).

For more information:

1289494 – FAQ: Oracle compression
1109743 – Use of Index Key Compression for Oracle Databases
1436352 – Oracle 11g Advanced Compression for SAP Systems
1431296 – LOB conversion and table compression with BRSPACE

Enjoy 🙂


7 Comments


  1. Daljit Boparai

    Hi Nick,

    We have tested compression of an ECC database and it can help shrink the database by at least 50%. But we back up to EMC Data Domain, and there are suggestions that compression will hit the dedup ratio adversely. Any info/experience about the impact on dedup ratios?

    1. Volker Borowski

      We had the same discussion with our NetBackup guys.

      Nevertheless, when using ACO, switching to RMAN compression will reduce the backup size again by roughly 30-50%, even for the already ACO-compressed database.

      And even then, dedup can still do something for you.

      You might want to refrain from multiplexing, because mixing different files each day might generate data that is barely deduplicable. But in a daily single-stream backup of PSAPSID700, I'd assume 95% of the data would be identical each day, even with RMAN compression, so dedup should still work at a certain level.
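
      For illustration, "switching to RMAN compression" means roughly the following (a generic sketch, not a specific setup):

          RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
          RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;

      In 11gR2 the default BASIC algorithm is included in the database license; the LOW/MEDIUM/HIGH algorithms require the Advanced Compression Option.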

      Volker

      1. Stefan Koehler

        Hi Volker,

        > switching to RMAN compression will reduce the backup size again by roughly 30-50%, even for the already ACO-compressed database

        Yes, for sure, as RMAN compression works completely differently than ACO (OLTP compression). The other and, in my opinion, more important question is why you should back up the whole database all the time and rely on a de-duplication mechanism at the backup layer if you are able to use incremental backups. An incremental backup (with block change tracking) is much faster (in most cases, for large environments), less resource-intensive on the database side, and does not transfer all the data over the LAN (or LAN-free) before de-duplication.
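
        For illustration, the mechanism is roughly this (a minimal sketch; the tracking file path is only an example):

            SQL>  ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
                    USING FILE '/oracle/C11/sapdata1/block_change_tracking.dbf';

            RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;
            RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

        With block change tracking enabled, the daily level 1 backup reads and transfers only the blocks that changed since the previous backup instead of scanning the whole database.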

        Regards

        Stefan

        1. Volker Borowski

          Hi Stefan,

          nevertheless, even if you use incrementals, you are still going to store several versions of full backups, not just a single one, so dedup can save space for those full versions as well, no matter how far apart in time they are. But of course, you should be careful that your deduplicated data is on very safe storage 🙂

          I mean, look at the old recommendations (take a daily full backup at the end of each business day, … keep a backup cycle of 30 versions, …): there are several ways to do better than this with modern technology. Depending on what you want to achieve, you will either be able to go back a whole lot further than 30 days using the same storage, or you will use a whole lot less space when 30 days are sufficient, without losing too much recovery time or version safety.

          Most of us will have something in between, depending on system requirements.

          Volker

      2. Daljit Boparai

        We use BRTOOLS to back up Oracle to Data Domain (no RMAN). According to Data Domain, RMAN compression is not advised; however, there is no recommendation regarding ACO. The vendor cannot provide any info on the impact of ACO on dedup. So now we are going to start compressing in phases and see the impact.

        thanks,

        Daljit

  2. Mark Förster

    Hello Nick,

    what about performance? Disk IO is the current bottleneck, which is also the justification for SAP HANA in the first place. If you compress data, that won't help your random IO requests which dominate the ECC and maybe even the BW system. So if you eliminate disks, you reduce the disk IO capability. On the other hand, a compression of 86.5% would also make flash storage an option and avoid this issue.

    Regards,

    Mark

    1. Stefan Koehler

      Hi Mark,

      > If you compress data, that won't help your random IO requests which dominate the ECC and maybe even the BW system

      It depends as always. 😉

      You will have much more efficient index range scans (= better clustering factor) if you can store more (ordered) data in one block. So even in OLTP environments you can get corresponding benefits from OLTP compression for "random reads" in various cases (just think about NL batching or TBL prefetching).

      It also can hide "bad implementations" and their nasty side effects up to a certain point, but this should not be the primary focus, of course 😈
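
      As a rough illustration (the table name is just a hypothetical example), the clustering-factor effect can be checked by comparing the index statistics before and after the table has been compressed and reorganised:

          SELECT index_name, clustering_factor, num_rows
            FROM dba_indexes
           WHERE owner = 'SAPSR3'
             AND table_name = 'ZEXAMPLE_TAB';

      The closer the clustering factor is to the number of table blocks (rather than to the number of rows), the cheaper the index range scans become.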

      Regards

      Stefan
