1.0          Overview

Error handling refers to the anticipation, detection, and resolution of programming, application, and communications errors. For a job to complete successfully, we need to provide a code path that handles the different exceptions and allows job execution to continue.

Error Handling is an important aspect of any ETL tool. BO Data Services has built-in error-handling and automatic recovery mechanisms in place. In addition, by using different data flow designs, we can manually recover a job from a failed execution and ensure correct data in the target.

In this document, we will cover different error handling techniques and recovery mechanisms in BODS.

2.0            Different Ways to Handle Errors in BODS

2.1       Error Handling Using Try & Catch

A try/catch block is a combination of one try object and one or more catch objects that allow us to specify alternative work flows if errors occur while executing a job.

[Figure: Try & Catch]

2.1.1        Catch Exceptions

A try/catch block can catch one or more of the exception groups predefined by Data Services; each catch object in the block is assigned the groups it handles.


2.1.2        Catch scripts

A script is the most common action that a catch executes for a thrown exception. The catch script can contain the following:

·         Catch error functions and other function calls

·         Nested try/catch blocks

·         If statements to perform different actions for different exceptions

The syntax for a try/catch block within a script is as follows:
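The original syntax listing is not shown here; the following is a minimal sketch of the structure. The exact keywords and the exception-group identifier passed to catch are assumptions and should be verified against the Data Services Reference Guide; the sql() and print() calls are standard script functions.

```
try
begin
    # Steps that might raise an exception
    $G_RowCount = sql('DS_Source', 'SELECT COUNT(*) FROM src_table');
end
catch (all)
begin
    # Actions to run when an exception from the specified group(s) is caught
    print('Caught exception: [error_message()]');
end
```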

2.1.3        Catch error functions

These error functions can be invoked only inside a catch script, a user function, or an audit script for a data flow.

·         error_timestamp() – Returns the timestamp of the caught exception.

·         error_context() – Returns the context of the caught exception.

·         error_message() – Returns the error message of the caught exception.

·         error_number() –  Returns the error number of the caught exception.
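For example, a catch script might log the details of the caught exception to the trace log. The print() function and the square-bracket expression substitution are standard Data Services script syntax; the message wording is illustrative.

```
# Log the details of the caught exception to the trace log
print('Error [error_number()] at [error_timestamp()]: [error_message()]');
print('Context: [error_context()]');
```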

2.1.4        Catch best practices

For each catch object in the try/catch block, specify the following:

A)     One or more groups of exceptions that the catch object handles. If we want to assign different actions to different exception groups, we need to add a catch for each set of actions.

B)      The actions to execute if an exception in the indicated exception groups occurs.

C)      Optional error functions inside the catch block to identify details of the error.

D)     For batch jobs only: if we are using the automatic recovery feature, we should not reference variables set inside a try/catch block in any subsequent steps. Referencing such variables could alter the results during automatic recovery.

E)      Also, try/catch blocks can be used within any real-time job component. However, try/catch blocks cannot straddle a real-time processing loop and the initialization or cleanup component of a real-time job.

2.2       Handling User Defined Exception

BODS provides two functions to generate user-defined exceptions:

A)     Raise_Exception     – Calling this function generates an exception with a user-defined error message, which is passed as a parameter.

B)      Raise_Exception_Ext – Calling this function generates an exception with a user-defined error message and an exit code.

The work flow or job may or may not terminate, depending on whether a try/catch block surrounds the call.
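For instance, a validation script might raise a user-defined exception when a control check fails. The raise_exception() and raise_exception_ext() functions are real Data Services functions; the variable name, message, and exit code below are hypothetical.

```
# Fail the job with a custom message if no rows were staged
if ($G_RowCount = 0)
begin
    raise_exception('No rows found in staging table - aborting load.');
end

# Variant with an explicit exit code for the calling process:
# raise_exception_ext('No rows found in staging table.', 2);
```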

2.3       Error Handling using Files

Error handling when using files (as source or target) can be achieved in the following ways:

2.3.1        Error Handling in File Formats

We can configure the File Format Editor to identify rows in flat-file sources that contain the following types of errors:

·         Data Conversion Errors — Data type mismatch between the file format and the data. For example, a field might be defined in the File Format Editor as an integer, but the data encountered is actually varchar.

·         Row Format Errors — In the case of a fixed-width file, the software identifies a row that does not match the expected width value.


In the File Format Editor, the Error Handling set of properties, when enabled, performs the following actions:

o   Log the data-type conversion or row-format warnings to the Error Log.

o   Limit the number of warnings to log without stopping the job.

o   Capture or write the errors in the Error File.

o   Check for either of the two types of source flat-file errors (data conversion and row format).

o   Write the invalid rows to a specified error file on the Job Server.

o   Stop processing the source file after reaching a specified number of invalid rows.

o   The error file format is a semicolon-delimited text file.

2.3.2        Using Error Log

Error logs help us determine the reason for a job execution failure. The Error log lists errors generated by Data Services, by the source or target DBMS, or by the operating system. If the execution completed without error, the Error log is blank. Trace logs, on the other hand, are used to determine where an execution failed.

2.3.3        Using Get_Error_Log operation

SAP BusinessObjects Data Services produces several types of log information for a batch job published as a web service. Use the Get_Error_Log operation to retrieve the error log for a batch job.

2.3.4        Using Overflow File in Source

For Oracle source tables, you can use an overflow file for error handling. Select Yes for Use overflow file and enter a name for the file. Errors that occur while reading data are logged in the overflow file, and job execution proceeds, ignoring the rows that caused the errors.

2.3.5        Using Overflow File in Target

If a row cannot be loaded into the target, it is written to the overflow file. When this option is set to Yes, options for the file name and file format are enabled.


3.0          Recovery Mechanism:

If an SAP Data Services job does not complete properly, we must fix the problems that prevented the successful execution of the job and run the job again. However, during the failed job execution, some data flows in the job may have completed and some tables may have been loaded, partially loaded, or altered. Therefore, we need to design the data movement so that we can recover—that is, rerun the job and retrieve all the data without duplicating or missing the data.

Following are two techniques to recover from unsuccessful job executions: Automatically recovering jobs, and Manually recovering jobs using status tables.

3.1       Automatically recovering jobs

Automatic recovery is a BODS feature that allows us to rerun unsuccessful jobs in recovery mode. With automatic recovery, the software records the result of each successfully completed step in a job. If a job fails, you can choose to run the job again in recovery mode. During recovery mode, the software retrieves the results for successfully completed steps and reruns uncompleted or failed steps under the same conditions as the original job. For recovery purposes, the software considers steps that raise exceptions as failed steps, even if the step is caught in a try/catch block.

Automatic recovery of a job is achieved in two steps:

1)      Enable recovery

To enable recovery of a batch job, execute it initially with Enable recovery (execution property) selected. This selection will prompt the software to record the results of each successful step.


2)      Recover from last failed execution

If the job fails, rerun it with this option selected. ‘Recover from last failed execution’ reruns uncompleted or failed steps under the same conditions as the original job.


3.2       Manually recovering jobs using status tables

We need to design jobs and work flows so that they can be manually recovered from an unsuccessful run.

A job designed for manual recovery should have certain characteristics:

o   We can run the job repeatedly.

o   We can implement special steps to recover data when a step did not complete successfully during a previous run.

The above functionality can be achieved by using an execution status table to produce jobs that can be run multiple times without duplicating target rows. The table records the job’s execution status. A “failure” value signals SAP BODS to take a recovery execution path.
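A sketch of such a check in a job-initialization script is shown below. The sql() function and the if/else script syntax are standard Data Services scripting; the datastore, table, and variable names are assumptions for illustration.

```
# Read the status of the previous run from the execution status table
$G_PrevStatus = sql('DS_Target', 'SELECT status FROM job_status WHERE job_name = {$G_JobName}');

if ($G_PrevStatus = 'failure')
begin
    # Signal downstream conditionals to take the recovery execution path
    $G_Recover = 1;
end
else
begin
    $G_Recover = 0;
end
```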

3.3       Ensuring Data is not duplicated in target

To ensure that we do not insert duplicate rows into the target table during recovery (manual or automatic), we can use the following design approaches:

§  Data Flow Design

The target table can be replaced every time the job is executed. This method is preferred if a significant percentage of the target table rows needs to be replaced.

§  Auto-correct load target table option

If this option is selected, the Job Server checks for existing rows in the target table before new rows are inserted or updated. The disadvantage of this method is that it slows down the job, and hence it should be used only when a small percentage of the target rows has changed.

§  Pre-load SQL commands

These can be used to remove partial database updates that have occurred during an incomplete job execution. Typically, the pre-load SQL command deletes rows based on a variable set before the partial insertion began.
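For example, if a global variable captures the load date before the insert begins, a pre-load SQL command on the target could remove any partially loaded rows from the failed run. The table and variable names are hypothetical; the square brackets substitute the variable's value at run time.

```
DELETE FROM sales_fact WHERE load_date = [$G_LoadDate]
```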
