Recently, I had a customer ask why some of their data loads were failing with out of space conditions.  As they put it: “We use sp_iqdbspace but it’s very inaccurate. It will report 97% but processes will fail with an out of space condition.”

 

Just because the server shows that there is space available doesn’t mean that you can actually use it.  I know, sounds odd, and wrong, but that’s the design.  Every database and most filesystems do this to a certain degree.

 

For SAP ASE (formerly Sybase Adaptive Server Enterprise), space is allocated in extents.  An extent can only be used by one table.  If an extent is partially full and no free extents remain, the database will still show some free space, but that space cannot be written to unless the write is for the table that owns the partially used extent.

 

Even filesystems do this.  Assume that a filesystem allocates space in 4k chunks (some are larger, some are smaller).  I write a 1 byte file.  The filesystem allocates a full 4k chunk, of which only 1 byte is used.  The remaining 4095 bytes cannot be used by anyone except that file.  Again, free space, but you can’t use it.
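The chunk arithmetic above can be sketched in a few lines.  This is an illustration only; the 4k chunk size is the assumption from the paragraph, not a property of any particular filesystem.

```python
# Illustrative sketch: internal fragmentation when storage is handed out
# in fixed 4 KB chunks (the assumed chunk size from the example above).
CHUNK = 4096

def allocated_bytes(file_size: int, chunk: int = CHUNK) -> int:
    """Bytes actually reserved on disk: whole chunks, rounded up."""
    if file_size == 0:
        return 0
    return -(-file_size // chunk) * chunk  # ceiling division

def wasted_bytes(file_size: int, chunk: int = CHUNK) -> int:
    """Free-but-unusable bytes trapped inside the file's last chunk."""
    return allocated_bytes(file_size, chunk) - file_size

print(allocated_bytes(1))  # 4096 -> a 1 byte file still consumes a full chunk
print(wasted_bytes(1))     # 4095 -> counted as used, unusable by other files
```

The 4095 "wasted" bytes are exactly the free-space-you-cannot-use situation the paragraph describes.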

 

With IQ, we do something similar.  When data is in memory it sits on a page (128k by default).  When we need to put that page on disk, it is run through a compression algorithm.  This algorithm outputs a series of blocks (a block is always 1/16th of a page).  Depending on the data or index structure on the page, a page may compress down to as little as 1 block, or see little compression and need 15 or 16 blocks.  In short, IQ always writes a variable amount to disk, in multiples of the block size, because data sets do not always compress at the same rate.  Rather than always writing a full 128k page to disk, we would rather write smaller, variable-sized units for a more balanced and optimized storage layout.

 

For storage optimization, IQ also keeps a list of all contiguous free block runs on disk, from 1 to 16 blocks (called block holes).  Why 1 to 16?  Because that’s the range a page can be compressed down to.  So we have a list of 16 block holes, 15 block holes, 14 block holes, and so on, all the way down to disk segments where just 1 block is free.  Add to this, IQ will never split a page across block holes.  If we need to write 5 blocks, we look for a 5 block or larger region; we cannot write 3 blocks to one place and 2 blocks to another.  If we only find a larger block hole, say a 6 when we only need 5, we convert the 6 block hole into a 5 and a 1, use the 5 block hole for the data, and are left with one more 1 block hole.

 

When the compression algorithm outputs the compressed storage, it is always done in whole blocks.  Data may compress down to 1k, but with a default block size of 8k, we look for a single 8k block on disk.  Or perhaps the data compresses to 33k; then we need 5 blocks of 8k each: 4 full blocks (32k) plus another block where just 1k is in use.  As with the examples above, the remaining 7k is unusable by any object other than the one that owns the page and blocks.
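The block math above reduces to a ceiling division.  A minimal sketch, using the defaults stated in the text (128k page, block = 1/16th of a page = 8k):

```python
import math

# Defaults from the text: a 128 KB page, a block is always 1/16th of a page.
PAGE_KB = 128
BLOCK_KB = PAGE_KB // 16  # 8 KB

def blocks_needed(compressed_kb: float, block_kb: int = BLOCK_KB) -> int:
    """Whole blocks required to hold a compressed page (minimum 1)."""
    return max(1, math.ceil(compressed_kb / block_kb))

print(blocks_needed(1))    # 1  -> 1 KB still occupies one full 8 KB block
print(blocks_needed(33))   # 5  -> 4 full blocks (32 KB) plus 1 KB in a fifth
print(blocks_needed(128))  # 16 -> a page that did not compress at all
```

In the 33k case, 7k of the fifth block is the unusable remainder the paragraph describes.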

 

Now, as mentioned, we have an optimized storage manager that tracks all contiguous block runs on disk, from 1 to 16 blocks.  We track both used regions and holes (unused regions).  I’ll use an extreme example to illustrate the issue you’re likely facing.  Suppose that IQ doesn’t have any 16 block contiguous storage regions, while the lists for block sizes 1 to 15 all have free areas on disk.  A page comes through that cannot be compressed at all, so it needs 16 blocks on disk.  Looking at the 16 block map, though, there are no contiguous regions of 16 blocks, so we cannot write to disk.  The database is “full”.  Remember, we cannot split the 16 block page into smaller units (like 8+8).  We can only write where there are 16 contiguous blocks, and that doesn’t exist, so we have an out of space condition.  Another load on different data may compress very well and only need chunks of 2-4 blocks.  Since those do exist, that load continues without error.  The database is not full for that load.
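This out-of-space-but-not-full behavior can be shown with a toy model of the hole lists.  The hole counts below are invented for illustration; the point is only that a write succeeds if and only if a hole of the needed size or larger exists.

```python
# Toy model of the hole lists described above: hole size (1..16 blocks)
# mapped to how many holes of that size exist. No 16 block holes remain.
holes = {size: 0 for size in range(1, 17)}
holes.update({2: 100, 3: 50, 4: 25, 8: 10})  # invented counts

def can_write(blocks: int) -> bool:
    """A page may not be split across holes, so a write of N blocks
    needs a single hole of N blocks or larger."""
    return any(holes[s] > 0 for s in range(blocks, 17))

print(can_write(3))   # True  -> a well-compressed page still fits
print(can_write(16))  # False -> "out of space", despite many free blocks
```

The database has plenty of free blocks in total, yet the uncompressible 16 block page cannot be placed anywhere.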

 

This is why we don’t want IQ to run close to 100% full.  At runtime, the DBA doesn’t know what the segment map looks like and could hit an out of space condition, not because there is no more space, but because there are no more holes of the size needed.  I usually recommend that customers don’t run IQ higher than 90-95% full.  In heavily active systems where a large volume of data is changing constantly, we recommend running IQ no higher than 80-85% full.

Of course this will vary because the “free” % is different in a 5TB database vs a 500TB database.  The net is to not run so close to the edge so that you can avoid these issues.
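The back-of-envelope arithmetic behind that caveat (database sizes here are just illustrative assumptions):

```python
# Rough headroom arithmetic: the same fill percentage leaves very
# different absolute amounts of free space at different database sizes.
def headroom_tb(total_tb: float, full_pct: float) -> float:
    """Free space remaining, in TB, at a given fill percentage."""
    return total_tb * (100 - full_pct) / 100

for total in (5, 500):
    print(f"{total} TB at 90% full leaves {headroom_tb(total, 90)} TB free")
# 10% free is 0.5 TB of headroom on a 5 TB database, but 50 TB on 500 TB.
```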

 

Quite often, the next question is something along the lines of “How do I find this out?”  We provide tools to do just that.

 

To capture the holes and the blocks/pages in use, you simply need to run sp_iqcheckdb in allocation mode:

sp_iqcheckdb 'allocation database'

 

This will output data that looks like this (abbreviated):

I. 07/12 14:46:48. 0000000003 DBCC Allocation Mode Report:
I. 07/12 14:46:48. 0000000003 ** DBCC Status: Errors Detected ******
I. 07/12 14:46:48. 0000000003 :
I. 07/12 14:46:48. 0000000003 Allocation Summary:
I. 07/12 14:46:48. 0000000003 Blocks Total: 300941768
I. 07/12 14:46:48. 0000000003 Blocks in Current Version: 12284160
I. 07/12 14:46:48. 0000000003 Blocks in All Versions: 12284160
I. 07/12 14:46:48. 0000000003 Blocks in Use: 12284198
I. 07/12 14:46:48. 0000000003 % Blocks in Use: 4
I. 07/12 14:46:48. 0000000003 ** Block Count Mismatch: -38 ******
I. 07/12 14:46:48. 0000000003 ** Blocks Leaked: 38 ******
I. 07/12 14:46:48. 0000000003 :
I. 07/12 14:46:48. 0000000003 Allocation Statistics:
I. 07/12 14:46:48. 0000000003 Marked Logical Blocks: 32359488
I. 07/12 14:46:48. 0000000003 Marked Physical Blocks: 12284160
I. 07/12 14:46:48. 0000000003 Marked Pages: 2022468
I. 07/12 14:46:48. 0000000003 Blocks in Freelist: 167377502
I. 07/12 14:46:48. 0000000003 Imaginary Blocks: 21211448
I. 07/12 14:46:48. 0000000003 Highest PBN in Use: 434507057
I. 07/12 14:46:48. 0000000003 ** 1st Unowned PBN: 4985110 ******
I. 07/12 14:46:48. 0000000003 Total Free Blocks: 288657572
I. 07/12 14:46:48. 0000000003 Usable Free Blocks: 259966579
I. 07/12 14:46:48. 0000000003 % Total Space Fragmented: 9
I. 07/12 14:46:48. 0000000003 % Free Space Fragmented: 9
I. 07/12 14:46:48. 0000000003 Max Blocks Per Page: 16
I. 07/12 14:46:48. 0000000003 1 Block Page Count: 30372
I. 07/12 14:46:48. 0000000003 2 Block Page Count: 3116
I. 07/12 14:46:48. 0000000003 3 Block Page Count: 352521
I. 07/12 14:46:48. 0000000003 4 Block Page Count: 674088
I. 07/12 14:46:48. 0000000003 5 Block Page Count: 310533
I. 07/12 14:46:48. 0000000003 6 Block Page Count: 64841
I. 07/12 14:46:48. 0000000003 7 Block Page Count: 78647
I. 07/12 14:46:48. 0000000003 8 Block Page Count: 136946
I. 07/12 14:46:48. 0000000003 9 Block Page Count: 76816
I. 07/12 14:46:48. 0000000003 10 Block Page Count: 57571
I. 07/12 14:46:48. 0000000003 11 Block Page Count: 47173
I. 07/12 14:46:48. 0000000003 12 Block Page Count: 22976
I. 07/12 14:46:48. 0000000003 13 Block Page Count: 39040
I. 07/12 14:46:48. 0000000003 14 Block Page Count: 55057
I. 07/12 14:46:48. 0000000003 15 Block Page Count: 18087
I. 07/12 14:46:48. 0000000003 16 Block Page Count: 54684
I. 07/12 14:46:48. 0000000003 1 Block Hole Count: 496
I. 07/12 14:46:48. 0000000003 2 Block Hole Count: 588
I. 07/12 14:46:48. 0000000003 3 Block Hole Count: 22
I. 07/12 14:46:48. 0000000003 4 Block Hole Count: 11
I. 07/12 14:46:48. 0000000003 5 Block Hole Count: 14
I. 07/12 14:46:48. 0000000003 6 Block Hole Count: 3
I. 07/12 14:46:48. 0000000003 7 Block Hole Count: 4
I. 07/12 14:46:48. 0000000003 8 Block Hole Count: 5
I. 07/12 14:46:48. 0000000003 9 Block Hole Count: 7
I. 07/12 14:46:48. 0000000003 10 Block Hole Count: 7
I. 07/12 14:46:48. 0000000003 11 Block Hole Count: 2
I. 07/12 14:46:48. 0000000003 12 Block Hole Count: 1
I. 07/12 14:46:48. 0000000003 14 Block Hole Count: 2
I. 07/12 14:46:48. 0000000003 15 Block Hole Count: 1
I. 07/12 14:46:48. 0000000003 16 Block Hole Count: 18040964
I. 07/12 14:46:48. 0000000003 :
I. 07/12 14:46:48. 0000000003 Partition Summary:
I. 07/12 14:46:48. 0000000003 Connection Statistics:
I. 07/12 14:46:48. 0000000003 Sort Records: 11864607
I. 07/12 14:46:48. 0000000003 Sort Sets: 18
I. 07/12 14:46:48. 0000000003 :
I. 07/12 14:46:48. 0000000003 :
I. 07/12 14:46:48. 0000000003 DBCC Info:
I. 07/12 14:46:48. 0000000003 DBCC Work units Dispatched: 6202
I. 07/12 14:46:48. 0000000003 DBCC Work units Completed: 6202
I. 07/12 14:46:48. 0000000003 DBCC Buffer Quota: 40888
I. 07/12 14:46:48. 0000000003 DBCC Per-Thread Buffer Quota: 2555
I. 07/12 14:46:48. 0000000003 Max Blockmap ID found: 127313
I. 07/12 14:46:48. 0000000003 Max Transaction ID found: 372236

 

In looking at the output above, focus on the “Block Page Count” and “Block Hole Count” lines.  “Block Page Count” is a count of pages per contiguous block size.  For instance, there are 30,372 pages that are compressed into a single block in the database.  There are also 54,684 pages that could not be compressed below 16 blocks.

 

“Block Hole Count” is a count of segments, or holes, of a certain block size.  This is a very new or lightly used database.  We can tell because most of the holes are 16 blocks in size; when a database is created, the file is split into 16 block chunks.  This database has just 496 1 block holes but 18,040,964 16 block holes.
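Tallying the hole counts from the sample report shows where the free space actually lives.  A quick sketch (numbers copied from the output above):

```python
# Hole counts copied from the sample sp_iqcheckdb allocation report.
hole_counts = {1: 496, 2: 588, 3: 22, 4: 11, 5: 14, 6: 3, 7: 4,
               8: 5, 9: 7, 10: 7, 11: 2, 12: 1, 14: 2, 15: 1,
               16: 18040964}

# Total free blocks = sum of (hole size * number of holes of that size).
free_blocks = sum(size * count for size, count in hole_counts.items())
in_16s = 16 * hole_counts[16]

print(free_blocks)                        # 288657572, matching "Total Free Blocks"
print(round(100 * in_16s / free_blocks))  # 100: nearly all free space is in 16 block holes
```

The sum reproduces the report’s “Total Free Blocks: 288657572” line, and confirms that this database’s free space is almost entirely unfragmented.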

 

As mentioned previously, when a write needs a certain hole size and no hole of that size exists, we will use a larger hole and split it up.  For instance, if a write needed an 8 block hole and none existed, we would likely take a 16 block hole and create two 8 block holes from it.  If no 16 block holes existed, we would split a 15 block hole (into an 8 and a 7), and so on.  We keep checking smaller hole sizes until we find a free hole or reach the size we originally needed.
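The split described above can be sketched as a small allocator.  This is a simplified model of the behavior the paragraph describes, not IQ’s actual free-list code: exact fits are used first, and otherwise the largest available hole is split, leaving the remainder as a new smaller hole.

```python
def allocate(holes: dict, needed: int) -> bool:
    """Model allocator: holes maps contiguous size (1..16) -> hole count.
    Returns True if the write could be placed, mutating the hole lists."""
    # Exact fit first: no split required.
    if holes.get(needed, 0) > 0:
        holes[needed] -= 1
        return True
    # No exact fit: split the largest available hole, as described above.
    for size in range(16, needed, -1):
        if holes.get(size, 0) > 0:
            holes[size] -= 1
            leftover = size - needed
            holes[leftover] = holes.get(leftover, 0) + 1
            return True
    return False  # no hole of the needed size or larger: out of space

holes = {16: 1}
print(allocate(holes, 8))  # True -> the 16 block hole is split
print(holes)               # {16: 0, 8: 1}: 8 blocks used, an 8 block hole remains
```

A second 8 block write would then consume the leftover hole exactly; a third would fail, even though earlier the dbspace held 16 entirely free blocks.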

 

I wanted to note, too, that this applies to all types of storage in SAP IQ (main, system main, and temporary storage types).  With temporary storage, though, it is not easy to see the hole mappings because they are reset after every SAP IQ restart.

 

My hope is that this helps clear up some of the mystery with respect to how SAP IQ uses storage and tries to optimize that use.



Mark Mumy (post author), in the comments:

I am not so sure.  During that process, we will be writing to newer areas on disk.  If the old, pre-rebuild indexes are close to smaller “holes”, then we can see some optimizations.  It would be very random and not guaranteed.  There are other reasons to rebuild HGs, like space and structure optimizations, rebuilding of the group array (G-array) into a more optimized space, etc.

      You only need to rebuild an FP if you know that the cardinality has been reduced.  When you rebuild make sure to specify the new count so that the n-bit will be rebuilt into a smaller structure.  If it cannot be shrunk in size (number of bits to represent uniqueness) then there is not much sense in rebuilding.

