Recently, we polled some of our customers about their estimated costs to manage a single terabyte of data for a year. Although there was no consensus (the numbers ranged from $25,000 to $100,000 per annum), there was a grudging acceptance that these costs are inevitable. After all, you still need the storage hardware, network connectivity, power, and more, as well as the labor to keep the data reliably available.

Plus, more people inside companies have more requirements to use more data more often for their jobs. This not only adds to the massive amounts of data being retained; it also puts pressure on IT departments to keep more of that information fingertip-ready for users. In addition, virtually all enterprises are hanging onto their data for longer periods. Heightened governance and regulatory concerns make retaining data the default policy in many organizations.

Gartner estimates that large companies are seeing data growth rates of 40% to 60% annually, so it doesn't take long for those storage costs to mount.
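To make that concrete, here is a rough back-of-the-envelope sketch in Python. The 10 TB starting footprint, $50,000-per-terabyte annual cost, and 50% growth rate are illustrative assumptions drawn from the ranges quoted above, not Gartner figures:

```python
# Illustrative projection of annual storage spend under compounding data growth.
# All three inputs are assumptions for the sake of the example.
start_tb = 10.0             # assumed current footprint, in terabytes
cost_per_tb_year = 50_000   # assumed annual cost to manage one terabyte (USD)
growth_rate = 0.50          # assumed annual data growth, midpoint of 40%-60%

tb = start_tb
for year in range(1, 6):
    tb *= 1 + growth_rate
    print(f"Year {year}: {tb:6.1f} TB  ->  ${tb * cost_per_tb_year:,.0f} per year")
```

At those assumed rates, the footprint more than triples within three years and the annual bill climbs well past the million-dollar mark.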

How can you manage these costs? First of all, you need a data retirement policy. When can you safely archive, and ultimately delete, information not covered by regulations? Naturally, each organization will have its own requirements. And some data, even seldom-accessed information, might never be removed, such as CRM data on long-term customers and HR information on employees.

But other information, and there's a lot of it, can be retired after negotiating with the information owners. You can add leverage to those discussions by making certain that the data owners also own the cost of storing the data over time. There's nothing like a line item on their budget to send department heads reaching for the delete key.

Yet with hot data inside a database, storage costs are often overlooked. That's because the key metric for a production database is generally performance. Putting any constraint on performance might jeopardize a service-level agreement, so such a step is never taken lightly.

With the release of Sybase Adaptive Server Enterprise 15.7, in-database compression is now an option. Rates will vary, needless to say, but we've seen compression ratios between 40% and 80%, certainly enough to make a notable impact on storage costs.
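As a quick, hypothetical sketch of what ratios like that can mean for the storage bill (the footprint and per-terabyte cost below are assumed values, not measurements):

```python
# Hypothetical savings from in-database compression at different ratios.
# The 75 TB footprint and $50,000-per-TB annual cost are assumptions.
footprint_tb = 75.0
cost_per_tb_year = 50_000

for ratio in (0.40, 0.60, 0.80):    # ratios from the range quoted above
    compressed_tb = footprint_tb * (1 - ratio)
    saved = (footprint_tb - compressed_tb) * cost_per_tb_year
    print(f"{ratio:.0%} compression: {compressed_tb:5.1f} TB stored, "
          f"${saved:,.0f} per year avoided")
```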

More important, I think, is that performance remains the top priority for ASE users, so the new compression capabilities do not put your SLA at risk. ASE can compress a single row to eliminate empty space in fixed-length columns; page dictionary and page index compression can be applied at the page or block level; and large objects (LOBs) are compressed in-database. Because compressed data occupies fewer pages, the overall effect is actually to reduce I/O time, and backup performance is greatly improved as a result.
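The reduced I/O follows from simple arithmetic: if rows shrink, more of them fit on a page, so a scan or a backup touches fewer pages. The page size, row width, and compression ratio in this sketch are illustrative assumptions, not ASE internals:

```python
import math

# Toy model of why compression can cut I/O: the same rows fit on fewer pages.
# Page size, row size, and the 60% compression ratio are assumed values.
page_size = 2048       # bytes per page (assumption)
row_size = 400         # average uncompressed row size in bytes (assumption)
rows = 10_000_000
compression = 0.60     # assumed compression ratio

def pages_needed(bytes_per_row: float) -> int:
    rows_per_page = max(1, int(page_size // bytes_per_row))
    return math.ceil(rows / rows_per_page)

before = pages_needed(row_size)
after = pages_needed(row_size * (1 - compression))
print(f"Full scan or backup: {before:,} pages uncompressed vs "
      f"{after:,} compressed ({1 - after / before:.0%} fewer page I/Os)")
```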

There's more information in this white paper. If you have data storage cost problems (and you do), it's worth a moment of your time.