How to create a Project, a Job, and a simple One-to-One Mapping in SAP BODS (Data Services), in detailed and easy steps.
The main aim of this blog is to describe every single step, with screenshots, of how to execute a Job in the SAP BODS Designer.
Points to remember:
- We have to enter our credentials correctly.
- The naming format is important in BODS, as it lets clients analyze the objects easily.
What does it do?
Let's keep it simple: "it extracts data from source systems and loads it into target systems."
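BODS itself is a graphical tool, but the extract-transform-load idea behind it can be sketched in plain Python. This is purely an illustration, not BODS code; the records and field names are made up for the example:

```python
# A minimal conceptual sketch of what an ETL job does.
# Illustrative Python only, not actual BODS code.

def extract(rows):
    """Read records from a source system (here, an in-memory list)."""
    return list(rows)

def transform(records):
    """Apply simple changes to each record (here, upper-casing names)."""
    return [{**r, "NAME": r["NAME"].upper()} for r in records]

def load(records, target):
    """Write the records to a target system (here, another list)."""
    target.extend(records)
    return target

source = [{"ID": 1, "NAME": "alice"}, {"ID": 2, "NAME": "bob"}]
target = []
load(transform(extract(source)), target)
print(target)  # each NAME arrives upper-cased in the target
```

In BODS, the same three stages are drawn graphically: the source file is the extract, the Query is the transform, and the target table is the load.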
Click on "SAP Data Services Designer" in Windows, which displays this dialogue box.
Fill in the required credentials.
Click on “Log On” to proceed.
After clicking Log On, select the "Repository" you created; in my case, "DS_REPO" is my repository.
Now recheck that all the credentials you provided are correct, then proceed by clicking "OK".
After clicking "OK", it takes a while, and then the Designer window pops up.
To create a job, we first need to create a Project. At the top left of the window you can see "Project"; click it, then select New, and then Project. Alternatively, you can press Ctrl+N to create a project instantly.
After selecting Project, a small window appears where we have to name our project. We have to be particular about the naming convention: everything should be in capital letters, in the form "Project_Company_Project name_Developer name". Then click "Create".
At the top left of the window you can see your "Project" created in the "Project Area".
Right-click on your project, find "New Batch Job", and click it!
Then inside (below) your Project you can find your "Job" created. The naming convention for a job is "Job_Company_Project name_Developer name".
Now we have to create a Work Flow inside the Job. On the right side of the window you can see a vertical toolbar; press the second option, which I have highlighted with a red mark.
After pressing the second option, drop it on the other side, where the "Work Flow" will be created. The naming convention we use here is "WF_Company_Project name_Developer name".
"Double-click" the Work Flow and it will open inside (below) the Job. All the work is done inside the Work Flow.
Inside the Work Flow we have to create a "Data Flow", which is used to extract, transform, and load data from the source to the target system. It is created just like the Work Flow: in the toolbar on the right side of the window, click the icon below the Work Flow icon and drop it on the other side, and you can see the Data Flow created. The Data Flow also has the naming convention "DF_Company_Project name_Developer name".
"Double-click" on the Data Flow to enter it; here you can create source files and map them to target files. I highlighted the Data Flow with a yellow mark.
Now at the bottom left of the window you can find your Repository, where you can select the type of file you want to retrieve. For now I'm choosing "Flat Files": right-click on it and press "New".
After clicking New, a new window pops up; on the left side, under General, you can see "NAME", where you have to enter a name.
Now we have to give the path of our file in "Root directory". Clicking it opens a window where you choose the path of your file; click OK, and the path appears in Root directory.
Below the Root directory you can find "File name(s)". After clicking it, a window pops up on the right side; select the file you want and click Open.
After you click "Open", a window immediately appears asking whether to overwrite the current schema with the schema from the file you selected. Click "Yes" and it overwrites it.
After clicking Yes, the data from the file is loaded here. To arrange it correctly, change the Column delimiter from Comma to "Tab" under Delimiters.
On selecting Tab, the data is arranged into the correct fields; also change "Skip row header" from No to "Yes", so that the first row supplies the field names. Then click "Save & Close".
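The effect of these two settings can be sketched outside BODS: a tab delimiter splits each line into columns, and skipping the row header means the first row is consumed as field names rather than data. A small Python illustration (the sample file content and field names are made up):

```python
import csv
import io

# Hypothetical tab-delimited flat file content; the first row
# holds the field names, matching "Skip row header = Yes".
flat_file = "EMP_ID\tEMP_NAME\tDEPT\n101\tRavi\tSales\n102\tPriya\tHR\n"

reader = csv.reader(io.StringIO(flat_file), delimiter="\t")
header = next(reader)  # consumed as field names, not as a data row
rows = [dict(zip(header, row)) for row in reader]

print(header)   # ['EMP_ID', 'EMP_NAME', 'DEPT']
print(rows[0])  # {'EMP_ID': '101', 'EMP_NAME': 'Ravi', 'DEPT': 'Sales'}
```

If the delimiter were left as Comma, each whole line would land in a single column, which is why the data looks jumbled before you switch it to Tab.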
Your file is then added to your Repository. You can drag the file and place it on the right side, where it asks whether to make it a "Source or Target file".
In this scenario I'm making it a Source file, so just click "Make Source". Your source file then appears. The naming convention is "SRC_Company_Project name_Developer name".
Now we have to map our Source file to a "Query". You can find it in the toolbar on the extreme right side of the window (I highlighted it); drag it in next to the source file and connect the two. The Query also has a naming convention: "QUERY_Company_Project name_Developer name".
Double-click on the "Query" and a new window pops up where you have to map the fields from "Schema In" to "Schema Out"; you can do this just by dragging them.
This is the scenario after mapping. In the Query we can make changes to our fields; afterwards, close the Query window.
Now we need a "Target file" to load the mapped data into. To get the Target file, click on the "Template Table", which is below the Query icon in the toolbar on the right side of the window, and drop it beside the Query. Before mapping, a window appears asking you for the Table Name; the Target file also has a naming convention, "TGT_Company_Project name_Developer name". Below, select your Datastore, then click "OK".
Once the Target file is created, it is stored in your "Datastore", which you can find at the extreme bottom left of the window, as highlighted.
After clicking OK, connect the Target file to the Query; the whole area in the yellow mark represents the "ONE TO ONE MAPPING". Here the data in the Query is linked to the Target file.
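A one-to-one mapping simply carries each Schema In field across to the Schema Out field of the same name, with no transformation applied. As a rough Python sketch (the field names and values are hypothetical):

```python
# One-to-one mapping: every output field is fed directly by the
# input field of the same name, unchanged.
schema_in = {"EMP_ID": 101, "EMP_NAME": "Ravi", "DEPT": "Sales"}

# The mapping pairs each Schema Out column with its Schema In source;
# in a one-to-one mapping the pairs are identical.
mapping = {out_field: out_field for out_field in schema_in}

schema_out = {out: schema_in[src] for out, src in mapping.items()}
print(schema_out == schema_in)  # True: the target row mirrors the source row
```

This is why the technique is called one-to-one: each target column has exactly one source column behind it, and the values pass through untouched.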
At the top left of the window you can see the highlighted icon in the toolbar representing "Save All", i.e. it saves your Data Flow. Click it!
After clicking "Save All", a dialogue box appears asking you to confirm saving the Data Flow. Click "OK" and it saves your Data Flow.
After saving, click the "Validate All" icon in the upper toolbar, which I have highlighted. It checks your whole Project and reports any errors it finds; if your project is clean, it says "No Errors Found". Then close the Output window.
Finally, it's time to execute your Job. At the top left of the window, in the Project Area, you can find your "JOB" under the PROJECT; right-click on the Job, then press "Execute…".
After clicking "Execute…", a dialogue box appears with the Execution Properties; click "OK".
If all your credentials and steps were right, it displays the whole execution (trace) log of your Job, and at the end you can see that your Job completed successfully. This is the whole trace of your Job.
That's all for now, folks! Next week I'll come back to the same topic, but I'll show you how to execute a Job via the "Data Services Management Console".