Technology Blogs by SAP
lbreddemann

I recently had to handle a case where a big and important system was rendered unusable by database corruption. Due to some bad luck and faulty procedures there was no good backup available. Even worse: the corruption had only been detected via transaction aborts in the SAP system.

This case reminded me of how important it is to perform regular consistency checks on the database – AT LEAST ONCE PER BACKUP CYCLE!

Why is this so important? Well, once no usable backup is available anymore, a corruption that is detected later may be hard or even impossible to repair.

Anyhow: many customers don’t do consistency checks, and the main argument is that they put too much additional load on the system.

Therefore I decided to run a quick test: which block consistency check option does the least harm to the system?

I tested two methods: dbverify, the old standard approach, and RMAN’s validate check. The test machine is rather untypical: my laptop (Pentium M @ 1.6 GHz, Windows XP). To get realistic results for your environment, just go ahead and perform the tests yourself.

With the recent versions of the BRTOOLS (see note 1016173) both approaches are supported, so it may be interesting to know which one to choose.

The test case is set up by having one session do some database work that is both I/O and CPU intensive (see the script below). In a second command prompt I started the database verification and compared the timings. These are the results:

                        no check     dbv     rman
script duration             8:48    9:31    10:34
verification duration          -    1:42     5:32

In terms of runtime, not checking anything is of course the fastest option.
Second place goes to the verification with dbv, both for the script duration and for the check itself.
Last is the verification with RMAN.

So you should go for dbv, shouldn’t you? It’s unfortunately not that easy.

DBV always has to be executed as an additional operation, and it also reports errors for blocks that are currently not used by the database and thus do no harm.
The reporting facility of DBV is rather weak – if several corrupt blocks are found, it can get quite uncomfortable to create an overview of the affected segments.
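As a reminder of what such a run looks like: DBV is invoked once per datafile from the OS command line. The datafile path below is just an example – adjust it to your system:

```
rem DBVERIFY checks one datafile per invocation;
rem the path and block size are examples only.
dbv FILE=E:\oracle\C11\sapdata1\system_1\system.data1 BLOCKSIZE=8192 LOGFILE=dbv_system1.log
```

The per-file invocation is exactly why building an overview over a whole database from the DBV logs is tedious.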

RMAN, on the other hand, provides easy-to-query database views, which makes it a “walk in the park” to check V$COPY_CORRUPTION, V$BACKUP_CORRUPTION and V$DATABASE_BLOCK_CORRUPTION.
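For example, the corruptions recorded in V$DATABASE_BLOCK_CORRUPTION can be mapped directly to the affected segments with a single join against DBA_EXTENTS – a sketch, to be adapted as needed:

```
-- map corruptions recorded by RMAN to their segments;
-- V$DATABASE_BLOCK_CORRUPTION is populated by RMAN backup/validate runs
select c.file#, c.block#, c.blocks, c.corruption_type,
       e.owner, e.segment_name, e.segment_type
  from v$database_block_corruption c
  left join dba_extents e
    on e.file_id = c.file#
   and c.block# between e.block_id and e.block_id + e.blocks - 1;
```

With DBV, producing the same overview means parsing log files by hand.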
The downside is – besides the longer runtime – the fact that RMAN needs the DB_BLOCK_CHECKSUM parameter to be set to (at least) TRUE and that it only checks blocks that already have a checksum. If the database was created on Oracle 10g this is always the case, but for databases that started out on lower releases there can still be blocks without checksums.
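Whether that prerequisite is met can be checked quickly from SQL*Plus:

```
-- show the current setting of DB_BLOCK_CHECKSUM
select value from v$parameter where name = 'db_block_checksum';
```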
The major advantage of RMAN is that it can perform the checks on the fly while backups are taken. That does not make restore/recovery tests with subsequent consistency checks superfluous, but it gives a higher level of assurance about the blocks read for the backup.
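A standalone check with RMAN, including the logical intra-block checks, can also be started directly – a sketch of the standard RMAN syntax; it reads all blocks and populates V$DATABASE_BLOCK_CORRUPTION without writing a backup:

```
RMAN> backup validate check logical database;
```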
Here is a script similar to the one I used:

set timing on
create table test as
select a.object_type||a.owner||a.object_name as DATA
from dba_objects a, dba_objects b where rownum <=100000;
alter table test add (data2 char(200));
update test set data2=data;
create index t1 on test (data);
create bitmap index t2 on test (data2);
exec dbms_stats.gather_table_stats(user, 'TEST');
drop table test;

For the verification runs I used
brbackup -c force -w only_dbv

and

brbackup -c force -w only_rmv

Have fun trying it out on your systems!

Best regards,

Lars