
Sometimes support messages have to go a rather long way before they finally arrive at a solution.

The following problem came up in a support message recently, and since neither the analysis steps nor the solution were obvious, I decided to make a blog out of it.

Ok, so the problem statement was something like this:

Hello support,
we’re using SAP BI and recently re-initialized some of our datasources.
From then on, our Oracle database writes out Archivelogs at a rate 4 times higher than before.
What’s causing this and how can we stop it?

Hmm… this does sound a bit odd!
Why should performing an initial load to the BI lead to a permanently raised level of DB change-logging?

So the first step for this message was, of course, to verify the customer's observation.

Nifty tools at hand

A very nice way to do that is to use the scripts from the RSORASTT support script framework (see note #1299493 “Loadable SAP Support Monitors in DBA Cockpit” for details on this).

With one of those scripts the following matrix was produced:

----------------------------------------------------------------------------------------------------------------------------------
  DAY|WD |00  |01  |02  |03  |04  |05  |06  |07  |08  |09  |10  |11  |12  |13  |14  |15  |16  |17  |18  |19  |20  |21  |22  |23  |
----------------------------------------------------------------------------------------------------------------------------------
06-08|TUE|   2|  12|  44|    |   1|    |    |  45|    |    |  45|    |    |    |    |    |    |    |    |    |    |    |    |    |
06-07|MON|  90|    |    |    |    |    |    |  43|    |    |  44|    |   1|  44|    |    |  43|    |   1|  44|    |    |   5|    |
06-06|SUN|    |    |    |    |    |    |    |    |    |  48| 116| 132|  92|  50|  42|  42|  53|  24|    |  60| 135| 130| 132| 124|

06-05|SAT|   6|   7|  45|  53| 144|  90|    |    |    |    |    |  68| 134| 123|    |    |  47| 128|  69|    |    |    |    |    |
06-04|FRI|   2|  14|    |  43|    |    |    |  40|   3|    |    |  54|    |  44|    |    |  44|    |    |  45|    |    |   1|    |
06-03|THU|   6|  11|  44|    |   1|    |    |    |    |  45|  47|    |    |  46|    |    |  44|    |    |  43|    |    |   1|    |
06-02|WED|   3|   8|  77|   8|   1|    |    |  45|    |    |  44|    |    |  43|    |   1|  45|    |   1|  45|    |    |   4|    |
06-01|TUE|   3|   6|  22|  24|   1|    |  18|  74|   7|    |  45|   1|    |  44|    |    |  44|    |    |  45|    |   1|   5|    |
05-31|MON|    |    |    |    |    |    |    |  45|    |    |  44|    |    |  43|    |    |  43|    |    |  44|    |    |   6|    |
05-30|SUN|    |    |    |    |    |    |    |    |  63|    |    |    |    |    |    |    |    |    |    |    |    |    |   1|    |

05-29|SAT|   3|   5|  28|  64| 141| 102|    |    |    |    |    |    |  97| 124| 133|  98|    |    |    |    |    |    |    |    |
05-28|FRI|   6|   6|  45|    |    |    |    |  45|    |    |  45|    |    |  44|    |    |  47|    |    |  46|    |   1|   1|    |
05-27|THU|   6|   6|  47|    |    |    |    |  47|    |    |  46|    |    |  44|    |    |  46|    |    |  46|    |    |   5|    |
05-26|WED|   2|  11|  43|   7|    |    |    |  44|    |    |  44|    |    |  44|    |   1|  45|    |    |  46|    |    |   5|    |
05-25|TUE|    |    |    |  49|  24|    |    |  44|    |    |  45|    |    |  45|    |    |  45|    |    |  45|    |    |   3|    |
05-24|MON|    |    |    |  49| 140| 137| 150|  53|    |    |    |    |    |    |    |    |    |    |    |    |    |    |   5|    |
05-23|SUN|    |    |    |  98| 146|   1|    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |   5|    |

05-22|SAT|   5|  11|    |  43|    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |
05-21|FRI|   4|   6|    |    |    |    |    |    |    |   4|   3|  42|  50|  43|  46|  44|  45|    |    |  46|    |    |   4|    |
05-20|THU| 158| 207| 212| 194| 156|  89|    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |   5|    |
05-19|WED|    |    |    | 202| 190| 152|    | 142| 199| 207| 218| 209| 172| 222| 208| 210| 225| 224| 219| 217| 218| 222| 216| 219|
05-18|TUE| 152| 208|   2|    |    |    |    |  53|    |    |    |  13|    |    |    |    |    |    |    |    |    |    |   5|    |
05-17|MON|    |    |    |  73| 106|   1|   2|   1|    |   9|   8|  61|  95| 160|  78| 207| 207| 231| 197| 208| 210| 216| 210| 193|
05-16|SUN|    |    |    |  10|    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |   6|    |

05-15|SAT|   3|   5|    |  64| 134|    |    |    |  32| 105|    |  21|    |  17|  16|    |   7|    |    |    |    |    |   4|    |
05-14|FRI|    |    |    |  66| 108|  14|   8|   7|   8|   7|   6|   6|   7|   7|   9|   8|   9|   8|   7|   7|   7|   9|   5|    |
05-13|THU|   4|   7|    |  72| 113|    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |   5|    |
05-12|WED|   4|   9|   5|    |   2|    |   6|   7|   6|   6|   6|   6|   7|   8|   7|   7|   7|   7|   7|   7|   8|   8|   4|    |
05-11|TUE|   4|   6|   8|  10|    |    |  10|   8|   6|  14|   7|   8|   7|   6|   7|   8|   8|   8|   7|   7|   7|   7|   6|    |
05-10|MON|    |    |    |  67| 107|  16|   6|   7|   6|   6|   7|   8|   7|   8|   7|   9|   6|   6|   7|   9|   6|   9|   4|   1|
05-09|SUN|    |    |    |  66| 120|    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |   6|    |
05-08|SAT|    |    |    |    |  48|    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |   5|    |
----------------------------------------------------------------------------------------------------------------------------------


What we see here is, for each day, 24 columns with the number of archive logs that were written out in that hour.
The initial data load started on Monday (May 17) and finished (with some interruptions) on Thursday (May 20).
Before this load we see an archive log creation rate of 6 – 9 archive logs per hour.
After the load we see something around 44 – 50 every three hours.

Not only had the total amount of redo log data increased, but the pattern in which the archive logs are written out had also changed.
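Incidentally, a comparable hour-by-hour matrix can be produced with plain SQL against V$ARCHIVED_LOG. This is only a sketch of the counting logic, not the actual RSORASTT script:

```sql
-- Count archived redo logs per day and hour of completion
-- (sketch; requires SELECT privilege on V$ARCHIVED_LOG)
SELECT TO_CHAR(COMPLETION_TIME, 'MM-DD') DAY,
       TO_CHAR(COMPLETION_TIME, 'DY')    WEEKDAY,
       TO_CHAR(COMPLETION_TIME, 'HH24')  HOUR,
       COUNT(*)                          ARCHIVED_LOGS
  FROM V$ARCHIVED_LOG
 GROUP BY TO_CHAR(COMPLETION_TIME, 'MM-DD'),
          TO_CHAR(COMPLETION_TIME, 'DY'),
          TO_CHAR(COMPLETION_TIME, 'HH24')
 ORDER BY DAY DESC, HOUR;
```

Pivoting the hours into columns is then just a presentation step, which is what the support script does for you.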

Now that we had proved the customer's observation to be totally correct, the next obvious question was:
“What causes so much database change logging?”

In fact this is one of those hard-to-answer questions, since no database keeps detailed logs about the objects, statements or users that caused the logging – except in the logs themselves. Actually, that's what the redo log files are there for.
Unfortunately it's not quite practical to try to read what's in Oracle archive log files.

So one main approach is to assume that in a BI system most data change comes from loading data, i.e. inserting data into tables.
This would make the tables and the associated indexes larger.
So let's look for objects that showed a considerable increase in size over the last weeks.
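Where the AWR repository is available (it requires a Diagnostics Pack license), segment growth can also be approximated directly in SQL. A sketch of the idea, in the same style as the statement further below; the output shown next was produced with the standard DBACockpit tools instead:

```sql
-- Approximate recent segment growth from AWR segment statistics
-- (sketch; DBA_HIST_SEG_STAT requires a Diagnostics Pack license)
SELECT * FROM
( SELECT
     O.OBJECT_NAME,
     O.SUBOBJECT_NAME,
     SUM(S.SPACE_ALLOCATED_DELTA)/1024/1024 GROWTH_MB
   FROM DBA_HIST_SEG_STAT S, DBA_OBJECTS O
   WHERE O.OBJECT_ID = S.OBJ#
   GROUP BY O.OBJECT_NAME, O.SUBOBJECT_NAME
   ORDER BY GROWTH_MB DESC )
WHERE ROWNUM <= 20;
```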

By using the standard tools from DBACockpit I found the following segments to be the ones growing fastest:

Months   (Last analysis: 01.06.2010 07:05:30)
-----------------------------------------------------------------------
Name            |Partition            |Type      |Si(MB)|Chg.Size/month
-----------------------------------------------------------------------
/BIC/B0000356000|B00003560000000000086|TABLE PART| 3.162|    2.449,355
/BIC/B0000356000|B00003560000000000084|TABLE PART| 3.255|    1.241,311
RSBERRORLOG     |                     |TABLE     |29.339|      994,767
/BIC/B0000356000|B00003560000000000082|TABLE PART| 3.142|      818,478
/BIC/B0000356000|B00003560000000000080|TABLE PART| 3.110|      625,500
/BIC/B0000356000|B00003560000000000078|TABLE PART| 3.204|      483,974
RSBERRORLOG~0   |                     |INDEX     |11.203|      462,247
/BIC/B0000356000|B00003560000000000076|TABLE PART| 3.179|      399,890
RSBERRORLOG~001 |                     |INDEX     | 8.237|      339,616
/BIC/B0000356000|B00003560000000000074|TABLE PART| 3.126|      338,208
/BIC/B0000356000|B00003560000000000072|TABLE PART| 3.157|      292,963
/BIC/B0000356000|B00003560000000000070|TABLE PART| 3.127|      261,868
/BIC/B0000267000|B00002670000000000002|TABLE PART| 8.195|      260,137
/BIC/B0000356000|B00003560000000000068|TABLE PART| 3.008|      224,211
/BIC/B0000356000|B00003560000000000066|TABLE PART| 3.058|      206,328
/BIC/B0000356000|B00003560000000000064|TABLE PART| 3.008|      186,740
/BIC/B0000356000|B00003560000000000062|TABLE PART| 3.072|      185,425
/BIC/B0000356000|B00003560000000000060|TABLE PART| 2.304|      181,479
/BIC/B0000356000|B00003560000000000057|TABLE PART| 2.240|      176,219
[...]


This boiled down to the RSBERRORLOG table and a table called /BIC/B0000356000.
For those of you familiar with the table naming convention of SAP BI it’s obvious that this /BIC/B0* table is a PSA table.
For us mere mortals transaction SE11 provides this information ;-D

/BIC/B0000356000
PSA for 2LIS_41_S920 P11-100 2LIS_41_S920

The “2LIS_41_S920 P11-100 2LIS_41_S920” is a BI data source name.

A hot lead – or not?

This sounded like a hot lead.
Unfortunately we still have no direct connection between the table growth and the redolog writing.

So I decided to take another approach and review the Oracle segment statistics.

SELECT * FROM
( SELECT
     SUBSTR(OBJECT_NAME, 1, 30) SEGMENT,
     SUBSTR(STATISTIC_NAME, 1, 20) STATISTIC_NAME,
     VALUE
   FROM V$SEGMENT_STATISTICS
   WHERE STATISTIC_NAME IN ('db block changes', 'physical writes')
   ORDER BY VALUE DESC )
WHERE ROWNUM <=10;

Result of the SELECT statement
-------------------------------------------------------------------
|SEGMENT                       |STATISTIC_NAME  |            VALUE|
-------------------------------------------------------------------
|/BIC/LZLMI02~0                |db block changes|   1.651.973.280 | <<<
|/BIC/LZLMI02                  |db block changes|   1.095.154.352 | <<<
|TESTDATRNRPART0               |db block changes|     107.267.472 |
|/BIC/LZLMI02~0                |physical writes |      45.216.909 |
|TESTDATRNRPART1               |db block changes|      15.905.616 |
|/BIC/LZLMI02                  |physical writes |      14.176.353 |
|/BI0/TMATERIAL                |db block changes|      13.590.528 |
|TESTDATRNRPART0~0             |db block changes|       5.284.080 |
|SEG$                          |db block changes|       3.218.256 |
|RSBKDATA~0                    |db block changes|       2.265.696 |
|/BIC/B0000356000              |db block changes|       2.115.920 | <<<
[...]


So indeed, the table we identified earlier belongs to the top 10 segments on which block changes, and therefore logging, occur.
But table /BIC/LZLMI02 and its index /BIC/LZLMI02~0 are in a totally different class: regarding the number of block changes, these two segments are a whole order of magnitude larger than the next-largest objects.

In fact, these two segments caused more block changes in that database than all other tables/indexes combined!

I thought this would be a good reason to learn something more about this table.

The first unusual thing (at least for a BI newbie like me) was the naming of the table.
I never came across a /BIC/L* table before.

NC IC? S.O.S. !

Thus I consulted my BI colleagues and got some great inter-component, intra-primary-support consulting.
This /BIC/L* table belongs to a special kind of InfoCube: a “non-cumulative” or NC Infocube (SAP BI documentation “Non-Cumulatives”).

As far as I understood it, the special feature of this InfoCube is the ability to properly track information that cannot be summed up (accumulated) over time.

Like the inventory in your stock.
It simply does not make sense to add up Monday's stock to Tuesday's – thus the standard summation model of InfoCubes cannot be applied here.
Instead the NC InfoCube records the amounts of stock leaving and entering the inventory and calculates the current stock at query runtime.
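The principle can be sketched in plain SQL; this is a toy example with a made-up table and column names, not the actual InfoCube data model. The stock for a given day is derived at query time as the running total of all receipts and issues up to that day:

```sql
-- Toy example: a non-cumulative key figure "stock" derived from movements
-- (hypothetical table MOVEMENTS; +QTY = goods receipt, -QTY = goods issue)
SELECT CALDAY,
       SUM(QTY) OVER (ORDER BY CALDAY) STOCK_AT_END_OF_DAY
  FROM MOVEMENTS
 ORDER BY CALDAY;
```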

Now, it's possible to place very specific restrictions on the validity of the data.
In short: the more attributes you use to restrict the validity, the more combinations of restricting attribute values need to be checked and updated every time data is loaded into the NC InfoCube.
Therefore it's highly recommended to use as few validity restrictions as possible (SAP BI documentation “Validity Area”).

For this specific NC InfoCube four attributes had been chosen:

InfoObject                 Description
----------                 ----------------
0CALDAY                    Calendar Day
0PLANT                     Plant
0STOR_LOC                  Storage Location
0BATCH                     Batch Number


Four attributes may not look like much, but it's the number of distinct values for each of them that accounts for the huge number of possible combinations.

To get a feeling for how many combinations are in the table right now, and which of those attributes contributes the most, we used the old but seldom-used transaction DB05.

Arcane DB05 techniques in use…

The following is the output of a DB05 analysis for table /BIC/LZLMI02 over the mentioned fields.
Attention: this transaction reads ALL data in the analyzed table and aggregates it.
So better be careful which combination of fields you put into it!

-----------------------------------------------------------------------
|Date/time of analysis:                  10.06.2010  13:18:18         |
|Analyzed table:                         /BIC/LZLMI02                 |
|Total number of rows:                      7.667.420 rows            |
|Requested bytes per row:                          16 bytes per row   |
|Requested bytes to be buffered 100%:     122.678.720 bytes           |
|Current buffering mode:                           no buffering       |
-----------------------------------------------------------------------

------------------------------------------------------------------------
|     Rows per    | Distinct   |          1           11          101
|   generic key   | values     |       - 10        - 100      - 1.000
------------------------------------------------------------------------
|      Key fields |            |
|  1   SID_0REQUID|          2 |          0            0            0
|  2   SID_0PLANT |         14 |          2            0            0
|  3   SID_0STOR_L|        816 |         92          108          278
|  4   SID_0BATCH |  7.667.420 |  7.667.420
------------------------------------------------------------------------

------------------------------- ----------------------------------------
|     Rows per    | Distinct  |     1.001    10.001    100.001 more than
|   generic key   | values    |  - 10.000 - 100.000 -1.000.000 1.000.000
------------------------------- ----------------------------------------
|      Key fields |           |
|  1   SID_0REQUID|          2|         0         0          0         2
|  2   SID_0PLANT |         14|         0         2          8         2
|  3   SID_0STOR_L|        816|       274        50         14
|  4   SID_0BATCH |  7.667.420|
------------------------------- ----------------------------------------


The output needs some explanation, however.

Usually you would see only one matrix with fields and distinct values, but for readability I reformatted the output, so we have two matrices here.

To understand this matrix, let's start with line 1.
For field SID_0REQUID there are 2 distinct values in the table, and each of them is held by more than 1 million rows.
So far, so obvious.
Line 2 now is NOT just about SID_0PLANT, but about the combination of SID_0REQUID and SID_0PLANT!
For this combination we get a total of 14 distinct combinations.
Two of them are rather rare, with just 1-10 rows each, while the other 12 combinations all have more than 10,000 rows.

And so on.

Interesting, of course, is the last line.
SID_0BATCH makes the combination of all columns unique!
So, for every one of the 7.667.420 combinations of the four fields, there is just one row in the table.
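The cardinalities DB05 reports here can be cross-checked with COUNT(DISTINCT ...) queries. A sketch, assuming the full column name behind the truncated “SID_0STOR_L” display is SID_0STOR_LOC:

```sql
-- Distinct value counts per generic key prefix (sketch; like DB05,
-- this reads and aggregates the whole table)
SELECT COUNT(DISTINCT SID_0REQUID)                          KEY1,
       COUNT(DISTINCT SID_0REQUID || '/' || SID_0PLANT)     KEY2,
       COUNT(DISTINCT SID_0REQUID || '/' || SID_0PLANT
                                  || '/' || SID_0STOR_LOC)  KEY3,
       COUNT(*)                                             TOTAL_ROWS
  FROM "/BIC/LZLMI02";
```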

Therefore this “batch number” is a near-unique attribute, and /BIC/LZLMI02 needed to keep track of many, extremely many, validity range definitions.

And remember: this is not the data that gets reported on, but just the validity information.
These 7.6 million rows needed to be updated whenever new data was loaded into the ZLMI02 InfoCube!

If that does not explain the high archivelog creation rate, I don’t know what else would 🙂

Happy to have found the culprit, one last question remained:
What can we, or rather the customer, do about this?

As many of you know, the most efficient way to do something is to avoid doing it.
Thus, we checked whether /BIC/LZLMI02 was actually used by any of the queries stored in this BI.

One easy way to approach this is to check table RSRREPDIR.
It contains the directory of all queries built on InfoCubes:

 INFOCUBE   GENUNIID

ZLMI02     ZLMI02/Z_ZLMI02_0001
ZLMI02     ZLMI02/E_ZLMI02_0002
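For reference, the two entries above come from a simple lookup on RSRREPDIR (a sketch on the database level; within the SAP system, transaction SE16 does the same):

```sql
-- List the queries defined on InfoCube ZLMI02 (sketch)
SELECT INFOCUBE, GENUNIID
  FROM RSRREPDIR
 WHERE INFOCUBE = 'ZLMI02';
```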


Just two queries for this InfoCube.
Via transaction RSRT (Query Monitor) it was then easy to look up whether either of those two queries used non-cumulative values.
To display this information, all you have to do is select the query and click the “Performance Info” pushbutton.
You'll then get a list with a section called “Technical Information” and a paragraph labeled “Non-Cumulative Indicators” (SAP BI documentation “Technical Information”).

It turned out that neither of the two queries actually used the non-cumulative values, let alone the validity restrictions, at all.

Get this! I mean: get THIS!

Get this: the vast amount of database change activity was done for no use!

The recommended solution was to remove the validity restriction attributes with the help of BI report RSDG_CUBE_VALT_MODIFY, as documented in the SAP BI documentation “Validity Area”.

There's also an overview note available on non-cumulative data handling in SAP BI (note #586163 “Composite Note on SAP R/3 Inventory Management in SAP BW”).

Finally, the problem was solved by changing the validity areas.


7 Comments


  1. David Hull
This is a great blog; I very much enjoyed reading about the process you went through to investigate this. I was also not aware of this script repository, so that was interesting as well (of course, I don't support Oracle databases anymore, unfortunately).

    I also agreed with your comments on another blog regarding system copies for DR, (I always seem to be in agreement with your comments) but it would seem that blog has been removed, so I can’t respond there.

    Cheers,
    David.

    1. Lars Breddemann Post author
      Hey David,

      thanks for the positive feedback on this!
      I wasn't too sure whether this way of presenting problem analysis know-how would work for anybody.
      But since I always like to read through solved support messages to figure out how colleagues got to their solution, I thought I'd give it a try.

      This other blog post you mentioned – yeah, seems that it had been removed.
      And I guess neither me nor Stefan Koehler, who also commented on it, are too sad about it 🙂

      So, it looks like a little group of interest is growing here in the DB/OS corner of SDN.
      Very nice!

      Cheers,
      Lars

      1. David Hull
        Absolutely Lars, I think this method of presenting the problem-solving process is excellent! Almost like Sherlock Holmes describing how he solved the case.

        It is not hard to find people that can look up answers and implement the solutions. It is hard to find people that can methodically follow a troublesome process to a conclusion. Not all errors are documented in OSS notes. I hope that many will learn more from the process you went through to find the solution than the solution itself.

        And yes, I like this corner! 🙂

        Cheers,
        David.

  2. Jeff Forshaw
    Great blog! Internally we’ve seen an issue where we get mass amounts of change logs on our MSS SQL DBMS … wondering if you know of any tools from SAP that work for MSS SQL?
    1. Lars Breddemann Post author
      Hi there,

      sorry, no MS SQL expertise in this brain 🙂
      All I know is some MaxDB and Oracle.

      Actually, I have no idea what MS SQL Server provides in terms of statistics or system activity history.
      In case of such problems, all I can propose is to open a support message to get the right experts on it right away.

      Glad you liked the blog anyhow.
      Cheers,
      Lars

