Applies to:
This article applies to SAP BI 7.0.

This article provides a step-by-step procedure for archiving data in a write-optimized DSO using the ADK method. The purpose is to explain the detailed technical procedure of ADK archiving, and of deleting and reloading archived data, in a write-optimized DSO. For archiving, we need to create a logical file and a physical file for each DSO, based on certain properties, using an Archiving Object. Developers who want to understand a stepwise implementation of SAP BI ADK archiving for a write-optimized DSO will benefit from this article.

Author:          Rati Verma
Company:     Infosys Limited
Created on:  11 April 2012

Author Bio
Rati Verma is an SAP BW consultant with Infosys Ltd with over 4 years of relevant experience. She has worked on various BI/BW implementation and support projects.



SAP BW projects have to handle huge volumes of data. The database size and the data in it are of high importance for every organization. Over a period of time, however, the large amount of accumulating data becomes a point of concern as the database grows in size. Most of this data is inactive, and it becomes difficult for organizations to manage data that increases substantially day after day. This eventually increases query execution time and brings serious issues with system performance and maintenance.
To resolve this issue, the concept of BW archiving comes into the picture. In BW archiving, the inactive data present in the InfoProviders can be deleted and transferred to an alternate storage system. This data can be reloaded back in case of special requirements.

Business Scenario

We have data in our system that we do not want to delete for good; instead, we want to move it away from our main data targets (InfoCubes and DSO objects). We want to improve load and query execution performance by decreasing the volume of data. In addition, we want to restrict system growth, and have decided to move some of the data onto a slower storage medium, which is also cheaper than the expensive, quickly accessible InfoProvider storage within our system. We may need to retrieve this data sometime in the future, and therefore we decide to archive our data.

SAP BW Archiving

SAP BW archiving is an effective solution for handling the high volume of inactive data present in the database. Creating a Data Archiving Process is straightforward, as the Archiving Object is created by the system itself.
In this article we will be focusing on the ADK process of archiving for write-optimized DSOs. The Archive Development Kit (ADK) is a tool provided by SAP. It acts as an abstraction layer between the SAP applications, the data, and the archive files.

Archiving Process

The Archiving Process in SAP BW 7.0 can be divided into three main sections:
  1. Creation of Data Archiving Process – First, we define the Data Archiving Process and all the necessary settings which are required to archive data from a Write Optimized DSO.
  2. Performing the Write operation – Next, we perform the write operation on the data to be archived from the DSO into the archive files.
  3. Performing the Delete operation – Finally, we perform the Delete operation on the data which has been archived from the Write Optimized DSO.

Creation of Data Archiving Process

To demonstrate the Data Archiving Process, an active write-optimized DSO that has data loaded into it has been selected.
For a standard DSO, the data for archiving is selected depending on the filters given in the selection conditions, but for a write-optimized DSO the archiving happens based on requests.

Step 1: Creating the Data archiving process (DAP) from transaction code RSA1

In transaction code RSA1 -> InfoProvider, search for and select the write-optimized DSO we want to archive, then right-click to access the context menu. From the context menu, select “Create Data Archiving Process”.

Step 2: General settings tab

The ‘ADK-Based Archiving’ check box is selected by default. If a near-line storage system is available, it can be selected here. A new Archiving Object is created by the system for every InfoProvider chosen for archiving. The Archiving Object name begins with ‘BW’, followed by ‘O’ for a DSO or ‘C’ for an InfoCube, followed by seven characters of the InfoProvider’s technical name.
As this is a write-optimized DSO, the ‘Request-Based Archiving’ radio button is selected by default.
Note: The ‘Nearline connection’ technology is out of scope of this article.
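The naming rule above can be illustrated with a short sketch (Python purely for illustration; this is not an SAP API, and the InfoProvider name used is hypothetical):

```python
def archiving_object_name(infoprovider, provider_type):
    # Illustrative only, not an SAP API: 'BW' + type character
    # ('O' for a DSO, 'C' for an InfoCube) + the first seven
    # characters of the InfoProvider's technical name.
    type_char = {"dso": "O", "cube": "C"}[provider_type]
    return "BW" + type_char + infoprovider[:7]

print(archiving_object_name("ZSALES_WO", "dso"))  # -> BWOZSALES_
```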

Step 3: Selection Profile tab settings

In the ‘Selection Profile’ tab, select the characteristic for time slicing. Here we have selected ‘Request Loaded Date’ as the time-slicing characteristic.

Step 4: Semantic Group tab settings

In the ‘Semantic Group’ tab, we can select the fields on which the newly created archive file is sorted. For a write-optimized DSO, ‘Request GUID’ and ‘Data Package ID’ are selected by default. We will keep this selection as it is.
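As a hedged illustration of what this default semantic group does (hypothetical records, not SAP’s internal archive format), sorting by Request GUID and then Data Package ID looks like this:

```python
# Hypothetical archive records: (request GUID, data package ID, payload).
records = [
    ("REQ_B", 2, "row3"),
    ("REQ_A", 1, "row1"),
    ("REQ_B", 1, "row2"),
]

# Sort by the default semantic group for a write-optimized DSO:
# Request GUID first, then Data Package ID.
records.sort(key=lambda r: (r[0], r[1]))
print([r[2] for r in records])  # -> ['row1', 'row2', 'row3']
```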

Step 5: ADK tab settings

On the last tab ‘ADK’, we specify all the properties which will be used for archiving the data.
We can select the ‘Logical File Name’ here. For this demonstration, the default file name has been chosen. If required, a new file name and path can be maintained in transaction code ‘FILE’.
We can also set the size, in MB, of the archive files that get created during the archiving process. If the first archive file created exceeds this size, a second file is created, and so on.
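The rollover rule described above can be sketched as follows (a simplified model with made-up chunk sizes, not real ADK internals): once the current archive file exceeds the configured size, subsequent data goes into a new file.

```python
def split_into_archive_files(chunk_sizes_mb, max_file_mb):
    # Simplified model of the ADK size rule (not real ADK internals):
    # data is appended to the current archive file, and once that file
    # exceeds the configured maximum size, the next chunk starts a
    # new file.
    files, current, current_size = [], [], 0.0
    for size in chunk_sizes_mb:
        current.append(size)
        current_size += size
        if current_size >= max_file_mb:
            files.append(current)
            current, current_size = [], 0.0
    if current:
        files.append(current)
    return files

# Three 60 MB chunks with a 100 MB limit yield two archive files.
print(split_into_archive_files([60, 60, 60], 100))  # -> [[60, 60], [60]]
```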
We can also select how the delete job should start once the write job is finished. It is good practice to start the delete job manually. If needed, the delete job can be started either automatically or after the occurrence of an event.
In this tab we can also specify the name of the content repository in which the archive files should be stored after creation. In this case, a default content repository has been selected for demonstration. A content repository can be created in transaction code ‘OAC0’.

If the ‘Start automatically’ option is selected, the archive files are stored in the content repository automatically after they are created. If we do not select this option, the archive files need to be stored manually using the ‘Store System’ → ‘Store Files’ option in transaction ‘SARA’ after the write job has finished, as shown in the below screenshot.

In the ‘Sequence’ section, we can specify whether the data from the DSO should be deleted before the archive files are stored in the content repository, or vice versa.
The ‘Delete Program Reads from Storage System’ check box is ticked when we want the delete program to take a copy of the data saved in the archive files in the content repository, match it against the data in the DSO, and only then proceed with the delete operation on the DSO data.
The Data Archiving Process needs to be saved and activated after this.

Performing the Write operation

Step 1: Go to archive administration of the DSO

To perform the write operation on a write-optimized DSO, we need to go to ‘Archive Administration’. For this, right-click the DSO and select the option ‘Manage ADK Archive’. Please find the below screenshot for reference.

After the above step we come to the SARA transaction.

Step 2: Create a variant on the SARA screen

Below is the screenshot of the screen we come to after Step 1.

We can see that the Archiving Object has been created automatically by the system.
Here we need to click on the ‘Write’ button and create a variant, as shown in the below steps.

Step 3: Name the variant and give selection conditions

Below screenshot shows the variant creation screen.
Here we can specify the name of the variant which we want to create and then click on the ‘Maintain’ button.
Below is the screenshot which shows the screen where we provide selection conditions in the variant to perform the archiving process.
In the case of write-optimized DSOs, we can enter the selections only through the ‘relative’ option, whereas for standard DSOs both relative and absolute values can be entered for the selection criteria.
Note: The absolute value input is out of the scope of this article.
In the above screen, depending on the value with which we want the ‘Loading Date’ field to be populated, we need to fill in the number of days in the ‘Only Data Records Older Than’ field. In our example we have entered 959 days. Accordingly, the system calculated the date 959 days prior to the current system date and populated the ‘Loading Date’ field.
Instead of days, we can also calculate the ‘Loading Date’ value based on year, half-year, quarter, month or week. We can select between the ‘less than’ and ‘less than or equal to’ logical operators for the selection condition on the ‘Loading Date’ field.
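The relative date calculation described above amounts to a simple subtraction; a minimal sketch (Python for illustration only, since the actual calculation is done by the system):

```python
from datetime import date, timedelta

def loading_date_cutoff(days_older_than, today):
    # Mirrors the 'Only Data Records Older Than' rule described above:
    # subtract the given number of days from the current system date
    # to derive the 'Loading Date' cutoff.
    return today - timedelta(days=days_older_than)

print(loading_date_cutoff(10, date(2020, 1, 11)))  # -> 2020-01-01
```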
Automatic Request Invalidation option: whenever we provide selection conditions in the variant for archiving data, the data gets locked for archiving during the write job. If an error occurs and the write job is cancelled, that data cannot be archived again because it remains locked. Using the ‘Automatic Request Invalidation’ option, the selected data is automatically unlocked so that it becomes available for archiving again.
If the automatic request invalidation option is not selected, we can manually unlock this data in the Archiving tab of the DSO’s Manage screen by invalidating the archiving request that was created along with the cancelled write job.
Note: The Archiving tab appears in the DSO’s Manage screen after the Data Archiving Process (DAP) for that DSO has been created, saved and activated.
Processing Options: Under processing options, it is necessary to select the ‘Production Mode’ option, because in ‘Test Mode’ only a simulation of the archiving process takes place and no actual archive files are generated. In Production Mode, actual archive files are generated.
Other settings: There are also other settings where we can select the type of log, its output type, and so on. After these settings are done, we can provide a description for our variant, save it, and click on the back button.

Step 4: Maintain start date and spool parameters for the variant

In the below screen we need to provide the ‘Start date’ and ‘spool parameters’.
Start date: The start date option helps us schedule the archiving process. We can schedule the job to run immediately, at a particular date and time, after a particular job finishes, after an event, or during an operation mode. In our example we will schedule the job immediately. To do this, select the ‘Immediate’ button and then click on save. The below screenshot explains the ‘Start date’ options.
Spool Parameters: The spool parameters option helps us to choose the print parameters for the archiving log. Below screenshot shows the different options.
After this is done we can start the archiving job by clicking on the execute button and then view the job logs by clicking on the job logs button. Please refer to the above screenshot.
In the below screenshot we can see that the write job as well as the storage job has finished. The storage of the archive files happened automatically because we selected the option ‘Start automatically’ during the creation of the Data Archiving Process (DAP), as explained earlier in the article. We can also see the start time and end time of the jobs, as well as the total time taken by each job to finish.
The below screenshot shows the job log of the archiving write process for the write-optimized DSO. In the highlighted portion we can see the archive file name and its location, as well as the number of records that fulfilled the selection criteria provided in the variant. In our case, 1,213,959 records were selected for archiving. The job finished and the archive files were created.
Since archiving for a write-optimized DSO happens request-based, we can have a look at the Requests tab in the Manage screen of our DSO. The requests that have been selected for archiving have a clock symbol in the ‘Request is archived’ column. As we have given the request loaded date as less than or equal to 07/20/2009, all the requests in this category have the clock symbol, as the below screenshot shows.
Note: The fourth tab, i.e. the Archiving tab seen in the above screenshot, is generated when the Data Archiving Process (DAP) is created and activated for the DSO.
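Because the selection is request-based, whole requests whose load date falls on or before the cutoff are marked for archiving. A hedged sketch (hypothetical request list, not SAP’s request management API):

```python
from datetime import date

# Hypothetical requests, as seen in the DSO's Manage screen.
requests = [
    {"request_id": "REQU_001", "loaded": date(2009, 5, 2)},
    {"request_id": "REQU_002", "loaded": date(2009, 7, 20)},
    {"request_id": "REQU_003", "loaded": date(2010, 1, 15)},
]

# Requests loaded on or before the cutoff are selected as a whole;
# individual rows within a request are never split.
cutoff = date(2009, 7, 20)
to_archive = [r["request_id"] for r in requests if r["loaded"] <= cutoff]
print(to_archive)  # -> ['REQU_001', 'REQU_002']
```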

Viewing the data in archived file:

Go to transaction /n/PBS/CBW. Select the InfoProvider on which the archiving job has been performed, select the ‘SAP archive file browser’, and then click on the execute button.
The below screen appears when we follow the above steps. Here we can see the archive files available for the particular DSO and Archiving Object. To view the data, simply double-click on the file.
In the below screenshot we can view the data in the archived file. The various columns of the DSO are displayed vertically on the left; each row of data is displayed in vertical format.
Note: Creating indexes on the primary characteristic used for archiving improves the archiving job’s performance by reducing the time required for archiving.

Performing the Delete operation:

The main reason behind archiving the data is to free database space occupied by inactive data. For this, after archiving is complete, we need to delete the archived data from the database. The data is archived before being deleted from the database as a safeguard against losing it forever.
The deletion process can be scheduled for automatic execution, as discussed in the Data Archiving Process (DAP) section earlier in this article.
In our example we will perform the delete operation manually.

Step 1: Start delete job from SARA transaction

Go to transaction ‘SARA’ and select the particular archiving object.
Click on the delete button, shown in the screenshot.

Step 2: Maintain archive selection, start date and spool parameters for the delete job

In the below screen we need to maintain three things: ‘Archive Selection’, ‘Start Date’ and ‘Spool Parameters’.
In ‘Archive Selection’ we can choose which of the archived files we need to delete. Select the ‘Archive Selection’ button.
In the below screen we can select, for deletion, the archive file that was created when we performed the archiving operation. We can also see that the status of the archive file is ‘Write Completed’.
After the above step, we maintain the ‘Start Date’ and the ‘Spool Parameters’ in the same way as in the archiving write process explained earlier in the article. After all three options have been maintained, we can execute the delete operation and then view the logs, as explained in the below screenshot.
The below screenshot shows the deletion log, in which the highlighted part shows the selection condition used for deleting data from the DSO and the number of records deleted from the DSO. The number ‘1,213,959’ matches the number of records we archived in the write operation explained earlier in the article. As we have taken a write-optimized DSO, the selection conditions show the request numbers of the various requests that fulfilled our deletion conditions. In the case of a standard DSO, the data is deleted from the active table of the DSO. In our case of a write-optimized DSO, the DSO has only one table, i.e. the active table.
In the below screenshot we can see the Requests tab in the Manage screen of the write-optimized DSO. We can clearly see that the requests from which the data was deleted have a tick mark in the ‘Request is archived’ column. Hence the data has been deleted from the database as per the selection conditions, i.e. request-based.

Performing the Reload operation:

Reloading the deleted data back into the database is performed only in exceptional cases. This process is rarely carried out, because we archive only data that we expect will not be required at any time in the future.
These exceptional cases may arise when we have archived the wrong data, or for auditing purposes.

Step 1: Start the reload job from SARA transaction

Go to transaction ‘SARA’ and select the appropriate Archiving Object. From the menu bar select Goto → Reload, as shown in the below screenshot.
SAP also warns us during the reload process by displaying the below popup message.

Step 2: Create variant for reload and define other settings

In the below screen we need to create a variant for reloading the data and then maintain it. For example, we have created a reload variant with the name ‘ZRELOAD’.
After the reload variant has been maintained, as explained in the below steps, we also need to maintain the other three options, namely ‘Archive Selection’, ‘Start Date’ and ‘Spool Parameters’, in the same way as in the delete operation explained earlier in this article.

Step 3: Provide description for the reload job

In the below screen, under the ‘Process Flow Control’ section, we can select ‘Test Run’ if we want the reload to happen in simulation mode, or ‘Reload’ mode for the data to actually be reloaded back into the database.
We need to provide a description of the reload in the ‘Archiving Session Note’ field. After this we can save the variant.

Step 4: Execute reload job and view job logs

After the variant has been saved and the other three options explained above have been maintained, we are all set to execute the reload job. For that we can click on the execute button and then view the job logs by clicking on the logs button as shown in below screenshot.
In the below screen we can view the job log of the reload job we just started. The highlighted portion shows the archive file name and the number of records reloaded back into the database. This number is the same as the number of records we archived and deleted from the database.
The details of the complete archiving process can also be seen in the ‘Archiving’ tab in the Manage screen of the DSO.
In the below screenshot we can see one archiving request and one reloading request in the ‘Archiving’ tab in the Manage screen of the DSO. We can also see the selection conditions, the number of records archived and reloaded, and other such details.
In the ‘Request type’ column we can see the reload request with a green arrow and the archiving request with a yellow arrow.


  1. Christophe Glauser

    Very good document. Thanks. But could anybody tell me what to do, if

    1.) we don’t need the data in the DSO anymore which is older than 12 months? I could archive the data into files and delete the files, but is there another possibility without archiving?

    2.) To archive the data is one way, but I still have the request information in the DSO, which goes back to May 2005. Is there a way to delete the request information even if these requests are forwarded to other DSOs? It is not allowed to delete a request in a normal DSO if the request is forwarded into another DSO.



