
Capture Killed job status in BODS

Error handling and recovery mechanisms are very important aspects of any ETL tool. BO Data Services has built-in error handling and automatic recovery mechanisms. In addition, by using different dataflow designs, we can manually recover a job from a failed execution and ensure correct data in the target.

In manual recovery, each dataflow's/workflow's execution status should be captured in a table (we call it the control table), which allows the next run to execute only the failed dataflows/workflows.

But if the job gets stuck and we have to kill it manually, the status of the killed job will not be automatically updated from 'Running' to 'Killed'/'Failed' in the control table. When a job is killed, it terminates immediately, so execution never reaches the catch block where we put the script or dataflow that captures the job status.

In this scenario, we can put a script at the start of the job that first checks the previous execution status of the job in the control table. If it shows 'Running', we can update the status of the previous instance in the control table to 'Failed'/'Completed' using the AL_HISTORY table (a repository metadata table that records the status of every job, along with the job name, job run ID, and start and end dates):

$G_PREV_RUNID = sql('<DATASTORE_NAME>', 'select max(JOB_RUN_ID) from JOB_CONTROL where JOB_NAME = {$G_JOB_NAME} and JOB_STATUS = \'R\'');

$G_ERR_STATUS = sql('DS_DBH', 'select STATUS from AL_HISTORY where SERVICE = {$G_JOB_NAME} and END_TIME = (select max(END_TIME) from JOB_CONTROL where JOB_NAME = {$G_JOB_NAME})');
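The overall check-and-update logic can be sketched outside of BODS as well. Below is a minimal illustration in Python using SQLite as a stand-in for the repository database. The table and column names (JOB_CONTROL, AL_HISTORY, JOB_RUN_ID, JOB_STATUS, SERVICE, STATUS, END_TIME) follow the blog; the exact schema and the default status 'F' are assumptions for illustration, not the repository's real definitions.

```python
import sqlite3

def reconcile_killed_job(conn, job_name):
    """If the previous run of job_name is still marked 'R' (Running) in the
    control table, copy the real status from AL_HISTORY into it.

    Returns the status written back, or None if no stuck run was found.
    """
    cur = conn.cursor()

    # Step 1: find the latest run that is still flagged as Running.
    cur.execute(
        "SELECT MAX(JOB_RUN_ID) FROM JOB_CONTROL "
        "WHERE JOB_NAME = ? AND JOB_STATUS = 'R'", (job_name,))
    prev_run_id = cur.fetchone()[0]
    if prev_run_id is None:
        return None  # previous run ended normally; nothing to fix

    # Step 2: look up the most recent status the engine recorded for this job.
    cur.execute(
        "SELECT STATUS FROM AL_HISTORY WHERE SERVICE = ? "
        "ORDER BY END_TIME DESC LIMIT 1", (job_name,))
    row = cur.fetchone()
    # A killed job may leave no usable history row; default to 'F' (Failed).
    real_status = row[0] if row and row[0] else 'F'

    # Step 3: overwrite the stale 'R' in the control table.
    cur.execute(
        "UPDATE JOB_CONTROL SET JOB_STATUS = ? "
        "WHERE JOB_NAME = ? AND JOB_RUN_ID = ?",
        (real_status, job_name, prev_run_id))
    conn.commit()
    return real_status
```

Running this at job startup mirrors the script above: the second call on a clean control table finds no 'R' row and does nothing, so the check is safe to run unconditionally.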



The AL_HISTORY table contains, among others, the columns referenced above: SERVICE (the job name), STATUS, START_TIME, and END_TIME.
NOTE: We need 'Select' access to the database on which the BODS repository is created.


      1 Comment
      Former Member

      Hello Tanvi,

Thanks for the above blog, but the above condition would not work if the job is aborted or the job server goes down: in those cases the control table is not updated with the end time, so the end time in the control table is null. I have also observed a time difference between the AL_HISTORY table and the control table.