
Introduction

Have you ever extended a SAP standard table with custom appends or includes without ever thinking about the performance impact (apart from data growth)? Then this blog should be worth reading. After several years of working in the SAP and Oracle performance area I have seen a bunch of such implementations, and most of the time it was not well known that extending an Oracle table (beyond a specific limit) can have nasty side effects.

This blog will focus on the “Row Chaining” effect, which inevitably occurs beyond a specific point of such implementations, with a short side trip to some commonly used data types, which can intensify this effect as well. I will not discuss the “Row Migration” effect, which is easily mixed up with “Row Chaining”, because it can be solved by adjusting PCTFREE accordingly and reorganizing the table once – so not a big deal. “Row Chaining” is much more difficult to avoid.

Data type usage

This section is just a short side trip. In most cases you can choose the column data type when you add custom appends or includes to an existing SAP (standard) table. Let’s briefly compare two possible Oracle data types and the space needed to store the same data.

I have no idea why SAP uses the data type VARCHAR2 for numeric-only values (like MATNR, for example), pads the content with zeros, and does not make use of NULL values (instead of using a space character as default value). Maybe this implementation exists for historical or compatibility reasons – I have no clue.


Let’s verify the effect with VSIZE, which returns the space used in data blocks, for a numeric-only column like MATNR and its default data type VARCHAR2(54).

SQL> create table VSIZET (MATNR_VARCHAR VARCHAR2(54),
                          MATNR_NUMBER_LZERO NUMBER,
                          MATNR_NUMBER NUMBER);
SQL> insert into VSIZET values ('000000002403870025', 000000002403870025, 2403870025);
SQL> select vsize(MATNR_VARCHAR),
            vsize(MATNR_NUMBER_LZERO),
            vsize(MATNR_NUMBER)
       from VSIZET;

VSIZE(MATNR_VARCHAR) VSIZE(MATNR_NUMBER_LZERO) VSIZE(MATNR_NUMBER)
-------------------- ------------------------- -------------------
                  18                         6                   6

The VARCHAR2 data type needs three times more space to store the same information than a NUMBER data type (without leading zeros). We should go for the NUMBER approach, as we only need to store numeric values in column MATNR. So be careful when choosing the ABAP DDIC data type for your data (and the resulting data type on the Oracle database). This is not always possible (e.g. if you need to join data on the Oracle database layer with an existing SAP table that already uses a VARCHAR2 data type), but think about it if the custom information is independent. The wrong data type can increase the amount of needed space and, in consequence, the number of migrated or chained rows.
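As a rough mental model (simplified, not Oracle’s exact on-disk format): VARCHAR2 costs one byte per character in a single-byte character set, while NUMBER costs roughly one exponent byte plus one mantissa byte per pair of significant decimal digits, with trailing zero pairs trimmed. The following sketch reproduces the VSIZE results above under these assumptions:

```python
def varchar2_size(s: str) -> int:
    """Bytes for a VARCHAR2 value in a single-byte character set:
    one byte per character - leading zeros are stored, too."""
    return len(s)

def number_size(n: int) -> int:
    """Approximate on-disk size of a positive integer stored as Oracle
    NUMBER: one exponent byte plus one mantissa byte per base-100 digit
    pair; trailing zero pairs are not stored. Simplified model for
    positive integers only."""
    digits = str(n)
    if len(digits) % 2:                 # pad to full base-100 pairs
        digits = "0" + digits
    pairs = [digits[i:i + 2] for i in range(0, len(digits), 2)]
    while pairs and pairs[-1] == "00":  # trailing zero pairs are trimmed
        pairs.pop()
    return 1 + len(pairs)

print(varchar2_size("000000002403870025"))  # 18 - matches VSIZE above
print(number_size(2403870025))              # 6  - leading zeros cost nothing
```

The model also shows why padding with zeros only hurts the VARCHAR2 variant: as NUMBER, 2403870025 and 000000002403870025 are the same value and occupy the same 6 bytes.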

Row Chaining

Let’s check the official Oracle documentation about “Row Chaining” first:

Oracle Database must manage rows that are too large to fit into a single block. The following situations are possible:

    • The row is too large to fit into one data block when it is first inserted.

In row chaining, Oracle Database stores the data for the row in a chain of one or more data blocks reserved for the segment. Row chaining most often occurs with large rows. Examples include rows that contain a column of data type LONG or LONG RAW, a VARCHAR2(4000) column in a 2 KB block, or a row with a huge number of columns. Row chaining in these cases is unavoidable.

    • A row has more than 255 columns.

Oracle Database can only store 255 columns in a row piece. Thus, if you insert a row into a table that has 1000 columns, then the database creates 4 row pieces, typically chained over multiple blocks.

When a row is chained or migrated, the I/O needed to retrieve the data increases. This situation results because Oracle Database must scan multiple blocks to retrieve the information for the row. For example, if the database performs one I/O to read an index and one I/O to read a nonmigrated table row, then an additional I/O is required to obtain the data for a migrated row.

The first root cause (“The row is too large to fit into one data block when it is first inserted”) rarely appears in SAP environments, as LONG RAW columns are not recommended anymore. However, it can still happen with LOBs, if the stored data is smaller than 4000 bytes and in-line LOBs are enabled, or with a lot of columns that together need the corresponding space.

The second root cause (“A row has more than 255 columns”) is much more applicable when SAP standard tables are extended with custom columns for additional business logic. In my experience, tables like VBAP or LIPS with 300 or more columns (after custom extensions) are very common.
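The splitting rule can be sketched as follows. Oracle stores at most 255 columns per row piece, and the block dump later in this blog suggests the split happens from the tail of the column list: for 256 columns you get a head piece with 1 column and a second piece with 255. This toy function (a simplified model; the exact piece layout is an Oracle internal) reproduces that behavior and the 4-piece example from the Oracle documentation:

```python
def row_pieces(ncols: int, max_piece: int = 255) -> list:
    """Split a row of ncols columns into row pieces of at most max_piece
    columns each: full pieces are taken from the tail, the head piece
    keeps the remainder. Simplified model of Oracle's row-piece split."""
    if ncols <= max_piece:
        return [ncols]                       # one piece, no chaining
    head = ncols % max_piece or max_piece    # remainder stays in the head piece
    return [head] + [max_piece] * ((ncols - head) // max_piece)

print(row_pieces(255))   # [255]                - one piece per row
print(row_pieces(256))   # [1, 255]             - two pieces for every row
print(row_pieces(1000))  # [235, 255, 255, 255] - the 4 pieces from the Oracle docs
```

Note the cliff at the boundary: a single extra column (255 to 256) doubles the number of row pieces for every row in the table.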

Such implementations have two nasty side effects that intensify each other, as both come down to additional I/Os.

  1. Row Chaining occurs for every stored row
  2. Oracle Advanced Compression cannot be used on tables with more than 255 columns (SAP Note #1431296)

Fast-growing and frequently accessed SAP core tables cannot be compressed (to reduce logical and physical I/O) and, in consequence, contain chained rows that force additional I/Os. Now just think about a table like VBAP, which is queried at high frequency by the SAP standard business logic, after you have extended it over this limit. As your system keeps growing, you will notice the performance impact more and more (even if you access VBAP by primary key most of the time) with every SELECT or DML statement that needs columns from both “row pieces”.

Here is a graphic of a chained row, before we go on with a short demonstration of its effect on logical and physical I/Os.

[Figure: a chained row whose second row piece does not fit into the same data block and is stored in a neighboring block]

In this case there was not enough space to store the second row piece (e.g. because of more than 255 columns) in the same block, so it had to be stored in another block. All row pieces of the next inserted row fit into the free space of the right-hand block (in this graphic). If all row pieces are stored in the same block, it is called “intra-block row chaining”. Chained rows can be caused by INSERTs or UPDATEs (and be migrated at the same time); migrated rows, on the other hand, are caused by UPDATEs only.
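The difference between intra-block and inter-block placement can be illustrated with a toy first-fit block packer. All sizes and the placement policy here are made up for illustration; Oracle’s real space management (PCTFREE, ASSM, freelists) is far more involved:

```python
def place_row(piece_sizes, blocks, block_size):
    """Place the row pieces of one row into 'blocks' (a list of remaining
    free bytes per block), first-fit, opening a new block when nothing
    fits. Returns 'intra' if all pieces of a multi-piece row share one
    block, 'inter' if they span blocks, 'none' for a single-piece row.
    Toy model of row-piece placement only."""
    used = []
    for size in piece_sizes:
        for i, free in enumerate(blocks):
            if free >= size:
                blocks[i] -= size
                used.append(i)
                break
        else:                               # no block has room: allocate a new one
            blocks.append(block_size - size)
            used.append(len(blocks) - 1)
    if len(piece_sizes) == 1:
        return "none"
    return "intra" if len(set(used)) == 1 else "inter"

blocks = []
print(place_row([10, 80], blocks, 100))  # intra - both pieces fit into one block
print(place_row([60, 60], blocks, 100))  # inter - the second piece spills over
```

This mirrors the demonstration below: intra-block chaining costs an extra logical I/O per row, while inter-block chaining can additionally cost a physical I/O for the second block.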

Demonstration

The following demo was done on an Oracle 11.2.0.3 database on OEL 6.2.

I will create two tables (one with 255 and one with 256 columns), each with a unique index, to demonstrate the “row chaining” effect. After the tables are created, 1000 rows are inserted into each.

SQL> create table TAB_255 ( C1 VARCHAR2(10), C2 VARCHAR2(10) DEFAULT 'AAA', ... , C255 VARCHAR2(10) DEFAULT 'AAA');
SQL> create unique index TAB_255_I on TAB_255(C1);
SQL> create table TAB_256 ( C1 VARCHAR2(10), C2 VARCHAR2(10) DEFAULT 'AAA', ... , C256 VARCHAR2(10) DEFAULT 'AAA');
SQL> create unique index TAB_256_I on TAB_256(C1);
SQL> begin
       for i in 1..1000 loop
         insert into TAB_255(C1) values(i);
         insert into TAB_256(C1) values(i);
       end loop;
       commit;
     end;
     /
SQL> exec dbms_stats.gather_table_stats(NULL, 'TAB_255');
SQL> exec dbms_stats.gather_table_stats(NULL, 'TAB_256');

SQL> select /*+ gather_plan_statistics */ * from TAB_255 where C1 = '200';

[Screenshot TAB_255_un.png: execution plan with runtime statistics for the unique index access on TAB_255]

SQL> select /*+ gather_plan_statistics */ * from TAB_256 where C1 = '200';

[Screenshot TAB_256_un.png: execution plan with runtime statistics for the unique index access on TAB_256]

We performed the same number of physical I/Os here (with an empty buffer cache), but one additional logical I/O was needed for the table with 256 columns. This extra I/O is caused by intra-block row chaining: no additional physical block has to be read, because all row pieces are stored in the same data block.

Let’s check a block dump of table TAB_256 for verification:

tab 0, row 0, @0x1b71
tl: 1023 fb: -----L-- lb: 0x1  cc: 255
col  0: [ 3]  41 41 41
....
col 253: [ 3]  41 41 41
col 254: [ 3]  41 41 41
tab 0, row 1, @0x1b64
tl: 13 fb: --H-F--- lb: 0x1  cc: 1
nrid:  0x0041a7a5.0
col  0: [ 3]  39 38 35

This verifies that every row needs two row pieces: one row piece with just one column (C1 in our case) and one row piece with the other 255 columns.

Now you may wonder: “Just one more logical I/O – why should I worry about this?” But think about index range scans, which extract many rowids from one or more leaf blocks, and the corresponding table accesses that cost logical I/Os only (in the best case) or additional physical I/Os (in the worst case).

Unfortunately, intra-block chaining is not tracked by the “table fetch continued row” statistic in my previous test scenario, but you can use the following query in your productive environment. Chained rows (spanning several blocks) will occur over time in such scenarios due to archiving and normal data processing.

SQL> select s.name, m.value
       from v$statname s, v$mystat m
      where s.statistic# = m.statistic#
        and s.name in ('table fetch continued row', 'table scan rows gotten', 'table fetch by rowid')
      order by s.name;

However, we can check the extra work that needs to be performed by a full table scan on TAB_256 in this test scenario.

SQL> select s.name, m.value
       from v$statname s, v$mystat m
      where s.statistic# = m.statistic#
        and s.name in ('table fetch continued row', 'table scan rows gotten', 'table fetch by rowid')
      order by s.name;

NAME                                VALUE
------------------------------ ----------
table fetch by rowid                    0
table fetch continued row               0
table scan rows gotten                 72

SQL> select * from TAB_256;
...
1000 rows selected.

SQL> select s.name, m.value
       from v$statname s, v$mystat m
      where s.statistic# = m.statistic#
        and s.name in ('table fetch continued row', 'table scan rows gotten', 'table fetch by rowid')
      order by s.name;

NAME                                VALUE
------------------------------ ----------
table fetch by rowid                    0
table fetch continued row             166
table scan rows gotten               1072

The counter “table fetch continued row” is not increased and stays at zero if you execute the same procedure on table TAB_255. But what does this statistic mean in our example? The table itself has only 256 blocks allocated (with the HWM at 168 blocks), yet 166 extra data block accesses (logical or physical I/Os) were needed to find the individual row pieces while fetching 1000 rows from table TAB_256.

Summary

Be careful if you want to extend a SAP table that already has a lot of columns (close to 255, 510, …). Adding custom columns to such tables can have a performance impact by increasing logical or physical I/Os. Also be aware that you can no longer use Oracle Advanced Compression if you extend a SAP table past the 255 column limit. Both effects can amplify each other in large SAP systems.

Sometimes it is better to create a custom “Z*” table and join it to the SAP standard table(s), if you need to implement some custom business logic that is used much less than the standard logic. Paying a few more logical or physical I/Os in just a few cases is better than paying them in general.

If you have any further questions, please feel free to ask – or get in contact directly if you need assistance with troubleshooting Oracle database performance issues.


2 Comments


  1. Volker Borowski

     Interesting blog. While more and more Oracle DBs use compression, we will face more and more of these chained rows. They are difficult to track, because dbms_stats does not populate the chain count column in DBA_TABLES any more. Nevertheless, in the lower left of the 6.40 ST04 or ST04old (on newer systems) you have a “fetch by continued row” value, which is likely to be higher than in the days before compression. Meanwhile I have seen this ratio increase to very high values, esp. on systems that simply got the overall compression according to the related SAP note. Sometimes the ratio is even up to 1:10. I guess this will be the “next” DBA job: hunting for the tables that need to be uncompressed. 🙂

     Volker

    1. Stefan Koehler Post author

       Hi Volker,

       thanks for your feedback.

       > Nevertheless in the lower left of 6.40 ST04 or ST04old (on newer systems) you have a “fetch by continued row”

       This metric is also published in DBACOCKPIT; however, the problem is that it is a system-wide metric and not object-related.

       > Meanwhile I have seen this ratio been increasing to very high ratios, esp. on systems that simply got the overall compression according to the related SAP note …. Hunting for the tables that need to be uncompressed.

       Well, it depends (as always) on the following points.

       • Is it really a chained or migrated row? (This can be verified by a full table scan on the corresponding object, as the “table fetch continued row” statistic is not counted for migrated rows there.)
       • If you are talking about “overall compression” according to the SAP Notes, SecureFiles LOBs with compression are used as well, and in this case in-line LOBs come into play. In my experience most of this LOB data is stored in-line (< 4000 bytes) after compression (e.g. TST03). In that situation you are going to hit the first root cause of row chaining. However, the important question is: what is better / less work – a chained row that causes 2 or 3 additional I/Os, or a “LOB locator -> LOB index -> LOB segment” access?

       Regards

       Stefan

