There is one case that took me more than an hour to find the correct way to fix.

The symptom is quite simple:

– We found one datafile in V$DATAFILE with a name like ‘MISSING9999’.

– DBA_DATA_FILES shows the following:

FILE_NAME FILE_ID TABLESPACE_NAME BYTES STATUS ONLINE_STATUS
/oracle/SID/102_64/dbs/MISSING00054 54 PSAPSR3 0 AVAILABLE RECOVER
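Such entries can be spotted quickly with the standard Oracle dictionary and performance views; a minimal sketch:

```sql
-- List datafiles whose name contains MISSING, together with their status
-- (V$DATAFILE and DBA_DATA_FILES are standard Oracle views).
SELECT file#, name, status
FROM   v$datafile
WHERE  name LIKE '%MISSING%';

SELECT file_name, file_id, tablespace_name, bytes, status, online_status
FROM   dba_data_files
WHERE  file_name LIKE '%MISSING%';
```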

– At the same time, we can see that the ‘missing’ datafile exists on disk (interesting!):

ls -la /oracle/SID/sapdata48/sr3_40/

total 55138644

drwxr-xr-x   2 orasid   dba          512 Sep 16 13:59 .

drwxr-xr-x  12 orasid   dba          512 Sep 16 13:59 ..

-rw-r-----   1 orasid   dba      34358697984 Sep 24 15:59 sr3.data40

Remember that SAP Note 19519 has been around for a long time and describes exactly what happened here and how to fix it. Sounds like an easy job. But is it?

Let’s go through the alert.log from the very beginning to understand what happened. (I skip the unimportant lines here to save space.)

Tue Sep 15 16:54:56 AST 2015
alter tablespace PSAPSR3 add datafile '/oracle/SID/sapdata48/sr3_40/sr3.data40' size 32767M autoextend off

Tue Sep 15 17:10:37 AST 2015
Completed: alter tablespace PSAPSR3 add datafile '/oracle/SID/sapdata48/sr3_40/sr3.data40' size 32767M autoextend off

Tue Sep 15 17:58:53 AST 2015
ORA-19502 signalled during: alter tablespace PSAPSR3 add datafile '/oracle/SID/sapdata48/sr3_41/sr3.data41' size 32767M autoextend off...

Tue Sep 15 18:15:02 AST 2015
Errors in file /oracle/SID/saptrace/usertrace/SID_ora_15635.trc:
ORA-00600: internal error code, arguments: [kdtdelrow-2], [23], [23], [], [], [], [], []
ORA-01116: error in opening database file 54
ORA-01110: data file 54: '/oracle/SID/sapdata48/sr3_40/sr3.data40'
ORA-27041: unable to open file
SVR4 Error: 2: No such file or directory
Additional information: 3

We created a new datafile called sr3.data40 (file id #54). But unfortunately, something went wrong on the disk side, which made the datafile unreadable and unwritable at that moment, and left it marked as MISSING. So the original problem was a disk issue; there was nothing wrong with the Oracle dictionary or the control file. As soon as we got back the file system containing this datafile, everything should have returned to normal.
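At that point, the standard view V$RECOVER_FILE would have shown which files the database thought needed recovery and why; a quick check might have looked like this (a sketch using standard Oracle views):

```sql
-- Which datafiles need media recovery, and for what reason?
SELECT r.file#, d.name, r.online_status, r.error
FROM   v$recover_file r
       JOIN v$datafile d ON d.file# = r.file#;
```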

As we have confirmed that the datafile itself currently exists, it looks to me like the disk issue has been resolved. So why is the datafile still reported as MISSING? (*1)

And the interesting story goes on.

Wed Sep 16 10:14:16 AST 2015
/* BRSPACE */ ALTER DATABASE BACKUP CONTROLFILE TO '/oracle/SID/sapreorg/serkinmy/cntrlSID.old'

Thank God we had a backup of the control file made soon after, so at least the complete datafile structure change is recorded in this control file. Can I use it for recovery if needed? (*2)

Wed Sep 16 13:45:34 AST 2015
Completed: CREATE CONTROLFILE REUSE DATABASE "SID" RESETLOGS  NOARCHIVELOG

We tried several times to recreate the control file to fix the problem. I do not know why we wanted to do that, but I believe somebody did not really understand what exactly was wrong here, and most likely the alert.log was not read carefully. It made things look even more complicated.

Wed Sep 16 13:59:48 AST 2015
/* BRSPACE */ alter tablespace PSAPSR3 add datafile '/oracle/SID/sapdata48/sr3_40/sr3.data40' size 20000M autoextend off

Wed Sep 16 14:05:27 AST 2015
Completed: /* BRSPACE */ alter tablespace PSAPSR3 add datafile '/oracle/SID/sapdata48/sr3_40/sr3.data40' size 20000M autoextend off

Even worse, we then created a new datafile with the same name as the lost datafile, in the same location. A very effective way to make things more and more complicated.

Now we understand why the ‘missing’ datafile does exist on disk. (*1) If we look back at DBA_DATA_FILES, there is another row:

FILE_NAME FILE_ID TABLESPACE_NAME BYTES STATUS ONLINE_STATUS
/oracle/SID/102_64/dbs/MISSING00054 54 PSAPSR3 0 AVAILABLE RECOVER
/oracle/SID/sapdata48/sr3_40/sr3.data40 57 PSAPSR3 34,358,689,792 AVAILABLE ONLINE
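Before deciding on a fix, it is also worth checking whether file #54 ever received any extents; a file that never held data only needs to be cleaned out of the dictionary, not recovered. A sketch (file_id 54 is the number from DBA_DATA_FILES above):

```sql
-- Does the MISSING file (file_id 54) actually contain any segments?
SELECT owner, segment_name, segment_type
FROM   dba_extents
WHERE  file_id = 54;
```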

Wed Sep 23 17:38:31 AST 2015
Completed: /* BRSPACE */ alter tablespace PSAPSR3 add datafile '/oracle/SID/sapdata45/sr3_45/sr3.data45' size 30000M autoextend off

And we even added several new datafiles afterwards, which are currently in use. So this makes the good control file backup useless to me now. (*2)

Now everything is clear after reading the whole story in the alert.log:

  1. The missing datafile was caused by an I/O problem at the beginning.
  2. A new datafile with the same name was then created in the same location as the missing one.

What is the solution to fix this problem then?

I think we have to reorganize the complete tablespace PSAPSR3, which contains the missing datafile, to fix this problem.
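The rough shape of such a reorganization, as a hedged sketch only: in a SAP system this would normally be driven by brspace, and the exact procedure should follow SAP Note 19519. The object and tablespace names in steps 2–3 (some_table, some_index, PSAPSR3NEW) are hypothetical placeholders, not from this system.

```sql
-- 1. Find all segments that live in the affected tablespace.
SELECT owner, segment_name, segment_type
FROM   dba_segments
WHERE  tablespace_name = 'PSAPSR3';

-- 2. Move each table (and rebuild each index) into a new tablespace,
--    e.g. a hypothetical PSAPSR3NEW created beforehand.
ALTER TABLE sapsr3.some_table MOVE TABLESPACE psapsr3new;    -- placeholder names
ALTER INDEX sapsr3.some_index REBUILD TABLESPACE psapsr3new;

-- 3. Once PSAPSR3 is empty, drop it; this removes the MISSING00054
--    entry from the dictionary together with the tablespace.
DROP TABLESPACE psapsr3 INCLUDING CONTENTS AND DATAFILES;
```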
