SAP BusinessObjects Data Services (BODS) is a GUI-based ETL tool that lets you create and monitor jobs which extract data from various types of sources, apply complex transformations on the data as per the business requirement, and then load the result into a target, which again can be of almost any type (an SAP application, a flat file, or a database). Jobs created in BODS can be either real-time or batch jobs.
Before getting into much detail, let's first look at the architecture of BODS.
The repository is nothing but a set of tables that hold user-created and system-defined objects, such as metadata for sources and targets and transformation rules. There are three different types of repositories:
- Local Repository
It contains all the metadata about system objects such as workflows, dataflows, and datastores. In simple terms it can be thought of as a folder; when there are multiple developers, each is usually assigned a separate local repository so that they can manage their work without creating confusion for the others. Jobs can be scheduled and executed from here. To maintain different environments such as Dev, QA, and Production, a local repository exists in each environment.
- Central Repository
It is basically used for version control of jobs; check-in and check-out operations are carried out from this repository. In real projects it supports the release-management strategy. One can easily compare a job in the local repository with its version in the central repository and thus see the changes made locally.
- Profiler Repository
It is used for data quality assessment and stores profiler tasks: the tasks created when a profiling request is submitted from the Designer or the admin console. The progress and execution of these tasks can be monitored here. It is very helpful on the analysis side, as it provides easy insight into the data and the pattern of its distribution.
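To make the idea of column profiling concrete, here is a minimal, illustrative sketch in plain Python of the kind of statistics a profiler task reports (null counts, distinct values, value distribution). This is not BODS code; the function name and the sample data are invented for the example.

```python
from collections import Counter

def profile_column(values):
    """Compute simple profile statistics for one column, similar in
    spirit to what a profiler task reports (illustrative only)."""
    non_null = [v for v in values if v is not None]
    freq = Counter(non_null)
    return {
        "count": len(values),                 # total rows
        "nulls": len(values) - len(non_null), # missing values
        "distinct": len(freq),                # cardinality
        "min": min(non_null) if non_null else None,
        "max": max(non_null) if non_null else None,
        "top_values": freq.most_common(3),    # distribution pattern
    }

# Sample source data: a COUNTRY column with a skewed distribution
country = ["US", "US", "DE", "US", None, "IN", "DE"]
stats = profile_column(country)
print(stats["distinct"], stats["nulls"], stats["top_values"][0])
# -> 3 1 ('US', 3)
```

Seeing that "US" dominates the column, or that a column is mostly null, is exactly the kind of insight that helps when designing transformation rules.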
The metadata reporting component contains all the metadata about a repository and can be used for reporting on that metadata, such as reporting on the objects created or their parent-child hierarchy. There is a complete set of tables and views which can be accessed via SQL commands or by choosing metadata reporting in the admin console.
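As a sketch of what querying such metadata tables looks like, the snippet below uses an in-memory SQLite database as a stand-in for the repository. The table and column names (`AL_PARENT_CHILD`, `PARENT_OBJ`, `DESCEN_OBJ`) are assumptions modeled on BODS repository conventions; the actual names and columns vary by version, so treat this purely as an illustration of the SQL-access idea.

```python
import sqlite3

# In-memory stand-in for a repository parent-child metadata table.
# The real repository exposes similar tables, but names/columns here
# are assumptions for illustration only.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE AL_PARENT_CHILD (
    PARENT_OBJ TEXT, PARENT_OBJ_TYPE TEXT,
    DESCEN_OBJ TEXT, DESCEN_OBJ_TYPE TEXT)""")
con.executemany(
    "INSERT INTO AL_PARENT_CHILD VALUES (?, ?, ?, ?)",
    [("JOB_SALES", "Job", "WF_LOAD_SALES", "Workflow"),
     ("WF_LOAD_SALES", "Workflow", "DF_SALES", "Dataflow")])

# A simple metadata report: the direct children of a given job
rows = con.execute(
    "SELECT DESCEN_OBJ, DESCEN_OBJ_TYPE FROM AL_PARENT_CHILD "
    "WHERE PARENT_OBJ = ?", ("JOB_SALES",)).fetchall()
print(rows)  # -> [('WF_LOAD_SALES', 'Workflow')]
```

Walking this table recursively from a job down through its workflows and dataflows is how a parent-child hierarchy report is built.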
All BODS objects are created in the Designer: workflows, dataflows, datastores, and so on. It is a graphical interface where most of the work is done by dragging and dropping; here we structure the job and define the transformation rules, which are the major part of an ETL process. (We will look at the Designer in more detail in the next documents.)
The Job Server contains the engines; it retrieves job information from the repository and executes jobs on those engines. A repository can be linked to one or more job servers, depending on the number of jobs executing at a particular time and the demand for better performance. It acts as the integration unit that holds all the job information, extracts data from the source systems, and loads it into the target systems.
Access Servers are similar to job servers, but they serve real-time jobs: an Access Server controls the passing of messages between source and target in real time, using XML messages.
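The request/reply pattern an Access Server mediates can be sketched as follows: an XML message arrives, a transformation rule is applied, and an XML reply goes back. The element names and the uppercase rule below are invented for the example; a real real-time job works against the XML schemas defined for its source and target messages.

```python
import xml.etree.ElementTree as ET

def handle_request(xml_request: str) -> str:
    """Toy stand-in for a real-time service: receive an XML message,
    apply a transformation rule, and return an XML reply.
    Element names are assumptions made for this illustration."""
    root = ET.fromstring(xml_request)
    name = root.findtext("customer_name", default="")
    reply = ET.Element("reply")
    # Sample transformation rule: normalize the name to uppercase
    ET.SubElement(reply, "customer_name").text = name.upper()
    return ET.tostring(reply, encoding="unicode")

print(handle_request(
    "<request><customer_name>acme</customer_name></request>"))
# -> <reply><customer_name>ACME</customer_name></reply>
```

The point is only the shape of the interaction: unlike a batch job that runs on a schedule, a real-time job stays up and answers each incoming message as it arrives.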
The Management Console is usually referred to as the admin console. Here we schedule and monitor all our batch jobs, and we can view the trace and error logs, which are helpful in analyzing the execution of a job.
This was all about the basic architecture of the BODS system; in the coming documents we will look at BODS objects such as workflows and dataflows in more detail.