In this blog, I will walk through step-by-step instructions for creating a BW archive object for InfoCubes and DSOs, and will also share some SAP-recommended BW housekeeping tips.
To start with, I thought I would go over some differences between ERP Archiving and BW Archiving:
ERP Archiving:
Delivered data structures/business objects
Delivered archive objects (more than 600 archive objects in ECC 6.0)
Archives mostly original data
Performs an archivability check for some archive objects, verifying business-complete data or residence time (the period of time that must elapse before data can be archived)
After archiving, data can still be entered for the archived time period

BW Archiving:
Generated data structures
Generated archive objects
Archives mainly replicated data
No special check for business completeness or residence time
After archiving a time slice, no new data can be loaded for that time slice
To begin archiving, you will need to perform the following steps:
Set up archive file definitions
Set up content repositories (if using 3rd party storage)
Create archive object for InfoCube/DSO
Step 1 – To begin archiving, you will need a place to write out the archive files. You do not necessarily need a 3rd party storage system (though I highly recommend one), but you do need a filesystem/directory in which to either temporarily or permanently “house” the files.
Go to transaction /nFILE
Either select an SAP-supplied Logical File Path or create your own.
Double click on the relevant Logical File Path, then select/double click on the relevant Syntax group (AS/400, UNIX, or Windows).
Assign the physical path where the archive files will be written to.
Next, you need to configure the naming convention of the archive files.
Select the relevant Logical File Path, and go to Logical File Name Definition:
In the Physical file parameter, select the relevant parameters you wish to use to build the archive file names. See OSS Note 35992 for all of the possible parameters you can choose.
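To make the parameter substitution concrete, here is a minimal sketch (in Python, not ABAP) of how a logical file name template might be resolved into a physical file name. The template and the placeholder values below are hypothetical stand-ins; the actual parameter list is the one documented in OSS Note 35992.

```python
from datetime import datetime

def resolve_physical_name(template: str, values: dict) -> str:
    """Substitute <PLACEHOLDER> tokens in a logical file name template.

    A simplified stand-in for the substitution transaction /nFILE performs;
    see OSS Note 35992 for the real list of supported parameters.
    """
    result = template
    for key, value in values.items():
        result = result.replace(f"<{key}>", value)
    return result

# Hypothetical template: archive object name, run date, and run time
template = "<PARAM_1>_<DATE>_<TIME>.ARCHIVE"
now = datetime(2024, 1, 15, 9, 30, 0)
name = resolve_physical_name(template, {
    "PARAM_1": "ZBW_SALES",          # archive object name (made up)
    "DATE": now.strftime("%Y%m%d"),  # run date
    "TIME": now.strftime("%H%M%S"),  # run time
})
print(name)  # ZBW_SALES_20240115_093000.ARCHIVE
```

The point is simply that every parameter you select in the Physical file definition becomes part of each generated archive file's name, which matters later when you need to identify files in the filesystem or storage system.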
Step 2 – If you will be storing the archive files in a 3rd party storage system (have I mentioned that I highly recommend this?), you need to configure the content repository.
Enter the Content Repository Name, Description, etc. The parameters entered will be subject to the 3rd party storage requirements.
Step 3 is to create the archive object for the relevant InfoCube or DSO:
Go to transaction RSA1:
Find and select the relevant InfoCube/DSO, right-click and then click on Create Data Archiving Process.
The following tabs will lead you through the rest of the necessary configuration.
The General Settings tab is where you will select whether you are going to configure an ADK based archived object, a Nearline Storage (NLS) object or a combination.
On the Selection Profile tab, if the time slice characteristic isn’t a key field, select the relevant field from the drop down and select the corresponding radio button.
If using the ADK method, configure the following parameters:
Enter the relevant Logical File Name, Maximum size of the archive file, the content repository (if using 3rd party storage), whether the delete jobs and store jobs should be scheduled manually or automatically, and if the delete job should read the files from the storage system.
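The "Maximum size of the archive file" setting determines when the write job rolls over to a new file. The ADK's actual packing logic is internal to SAP, but the effect can be sketched as a simple greedy split (illustrative only; record sizes and the cap are arbitrary units here):

```python
def split_into_archive_files(record_sizes, max_file_size):
    """Greedily pack records into archive files, starting a new file
    whenever adding the next record would exceed the size cap.

    Illustrative model of the 'Maximum size of the archive file'
    setting; not SAP's actual implementation.
    """
    files, current, current_size = [], [], 0
    for i, size in enumerate(record_sizes):
        if current and current_size + size > max_file_size:
            files.append(current)   # close the current archive file
            current, current_size = [], 0
        current.append(i)           # record index goes into the open file
        current_size += size
    if current:
        files.append(current)
    return files

# Records of 40, 40, 30, 50, 20 units with a 100-unit file cap
print(split_into_archive_files([40, 40, 30, 50, 20], 100))
# [[0, 1], [2, 3, 4]]
```

A smaller cap means more, smaller archive files, which can make later retrieval and storage transfers more manageable.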
You then need to Save and Activate the Data Archiving Process.
Once the archive object has been activated, you can either schedule the archive process through the ADK (Archive Development Kit) using transaction SARA, or you can right-click on the InfoCube/DSO and select Manage ADK Archive.
Click on the Archiving tab:
Then click on Create Archiving Request.
When submitting the Archive Write Job, I recommend selecting the check box for Autom. Request Invalidation.
If this is selected and an error occurs during the archive job, the system will automatically set the status of the run to ‘99 Request Canceled’ so that the lock will be deleted.
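The invalidation behavior described above can be sketched as simple error handling (a Python model of the flow, not SAP code; the status strings and dictionary fields are made up for illustration):

```python
class ArchiveRequestError(Exception):
    """Stand-in for any error raised during the archive write job."""

def run_archive_request(write_job, request_status, auto_invalidation=True):
    """Run the archive write job; on failure, mimic 'Autom. Request
    Invalidation' by setting the run status to '99 Request Canceled'
    so that the write lock is released."""
    try:
        write_job()
        request_status["status"] = "archived"
        request_status["lock_released"] = True
    except ArchiveRequestError:
        if auto_invalidation:
            request_status["status"] = "99 Request Canceled"
            request_status["lock_released"] = True
        else:
            # Without auto invalidation, the lock stays in place and
            # must be cleaned up manually
            request_status["status"] = "error"
            request_status["lock_released"] = False
    return request_status

def failing_job():
    raise ArchiveRequestError("write error")

print(run_archive_request(failing_job, {}))
# {'status': '99 Request Canceled', 'lock_released': True}
```

This is why the checkbox is worth selecting: a failed run cleans up after itself instead of leaving the InfoCube/DSO locked.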
If submitting the job through RSA1 -> Manage, select the appropriate parameters in the Process Flow Control section:
When entering the time slice criteria for the archive job, keep in mind that a write lock will be placed on the relevant InfoCube/DSO until both the archive write job and the archive delete job have completed.
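The lock lifecycle can be modeled as follows (an illustrative Python sketch of the behavior described above, with invented names; SAP manages these locks internally):

```python
class InfoProviderArchiveLock:
    """Model of the write lock on an InfoCube/DSO time slice: loads into
    the slice are blocked until BOTH the archive write job and the
    archive delete job have completed."""

    def __init__(self):
        self._pending = {}  # time slice -> set of jobs still running

    def start_archive(self, time_slice):
        # Both jobs must finish before the lock is released
        self._pending[time_slice] = {"write", "delete"}

    def complete(self, time_slice, job):
        jobs = self._pending.get(time_slice, set())
        jobs.discard(job)
        if not jobs:
            self._pending.pop(time_slice, None)  # lock released

    def can_load(self, time_slice):
        return time_slice not in self._pending

lock = InfoProviderArchiveLock()
lock.start_archive("2023-Q1")
print(lock.can_load("2023-Q1"))  # False: write lock held
lock.complete("2023-Q1", "write")
print(lock.can_load("2023-Q1"))  # False: delete job still pending
lock.complete("2023-Q1", "delete")
print(lock.can_load("2023-Q1"))  # True: both jobs finished
```

In practice this means you should size your time slices and schedule the jobs so that the lock window does not collide with your load schedule.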
Additional topics to consider when implementing an archive object for an InfoCube/DSO:
For DSOs (ODS objects), ensure all requests have been activated
For InfoCubes, ensure the requests to be archived have been compressed
Recommended to delete the change log data (for the archived time slice)
Prior to running the archive jobs, stop the relevant load job
Once archiving is complete, resume relevant load job
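The checklist above lends itself to a simple pre-flight check. Here is a hedged sketch; the field names on `provider` are hypothetical stand-ins for the request status you would actually verify in RSA1 / the Manage screen:

```python
def pre_archive_checks(provider: dict) -> list:
    """Return a list of issues that should be resolved before
    scheduling the archive jobs (illustrative model only)."""
    issues = []
    if provider["type"] == "DSO" and provider.get("unactivated_requests", 0) > 0:
        issues.append("activate all requests first")
    if provider["type"] == "InfoCube" and provider.get("uncompressed_requests", 0) > 0:
        issues.append("compress the requests to be archived")
    if provider.get("load_job_running", False):
        issues.append("stop the load job before archiving")
    return issues

cube = {"type": "InfoCube", "uncompressed_requests": 2, "load_job_running": True}
print(pre_archive_checks(cube))
# ['compress the requests to be archived', 'stop the load job before archiving']
```

An empty result would mean the provider is ready for the write/delete jobs; once those complete, the load job can be resumed.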
In addition to data archiving, here are some SAP recommended NetWeaver Housekeeping items to consider:
These come from the SAP Data Management Guide, which can be found at www.service.sap.com/ilm (be sure to check back every once in a while, as it gets updated every quarter).
There are recommendations for tables such as:
RSPC* (BW Process Chains)
There are also several SAP OSS Notes that describe options for tables that you do not need to archive:
Search SAP Notes on Clean-Up Programs
Clean-up program RSBATCH_DEL_MSG_PARM_DTPTEMP
Clean-up program RSARFCER
Clean-up program RSDDK_STA_DEL_DATA
Clean-up program RSRA_CLUSTER_TABLE_REORG
Clean-up program RSPC_INSTANCE_CLEANUP
This is some basic information to help you get started with a BW Archiving strategy. I hope you find it useful.