
Consuming Data From Cloud Storage in SAP BusinessObjects Data Services

In an earlier blog, we discussed integrating a Big Data workflow with SAP BODS. In this blog, we will explore how to use Cloud Storage services directly in a BODS workflow.

Cloud Storage services are offered by the major cloud platforms to store and handle large numbers of files of considerable size. AWS S3, Azure, and Google Cloud provide Cloud Storage that is commonly used for ad-hoc files such as logs, flat files, and data dumps. SAP BODS 4.2 SP7 introduced support for these Cloud Storage services.


In this blog, we will consume data from AWS S3; the steps for the other cloud services are similar.

Configuring Cloud Storage Services

The Cloud Storage service must be configured so that SAP BODS can connect to it. The configuration steps are documented in each cloud vendor's guide.

To connect to AWS S3, we need to enable IAM access in AWS. Once IAM access is enabled, an access key and secret key must be generated for the IAM user with access to S3; BODS uses these keys to consume the data from S3.

The access key and secret key can be generated from the Users section in IAM. Copy both keys after generation, as the secret key is displayed only once.
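Outside of BODS, you can quickly verify that the generated key pair works. A minimal sketch using the third-party boto3 library — the bucket name, key values, and region below are placeholders, not values from this setup:

```python
def s3_client_kwargs(access_key: str, secret_key: str, region: str) -> dict:
    """Assemble the S3 connection settings from the IAM key pair."""
    return {
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
        "region_name": region,
    }

def verify_keys(access_key: str, secret_key: str, region: str, bucket: str):
    """Network call (not run here): list a few objects to confirm access.

    Requires `pip install boto3`; bucket/keys are placeholders.
    """
    import boto3  # third-party assumption
    s3 = boto3.client("s3", **s3_client_kwargs(access_key, secret_key, region))
    resp = s3.list_objects_v2(Bucket=bucket, MaxKeys=5)
    return [obj["Key"] for obj in resp.get("Contents", [])]
```

If the listing succeeds, the same key pair can be entered in the BODS File Location.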


Place the required files in the S3 bucket so that SAP BODS can consume them.
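Staging files into the bucket can also be scripted. A hedged sketch, again assuming boto3 — the file, bucket, and prefix names are illustrative only:

```python
import os

def object_key(prefix: str, local_path: str) -> str:
    """Derive the flat S3 object key for a local file, e.g. 'incoming/sales.csv'."""
    name = os.path.basename(local_path)
    return f"{prefix.strip('/')}/{name}" if prefix else name

def stage_file(local_path: str, bucket: str, prefix: str) -> None:
    """Network call (not run here): upload the file into the bucket.

    Requires `pip install boto3`; credentials come from the environment.
    """
    import boto3  # third-party assumption
    boto3.client("s3").upload_file(local_path, bucket, object_key(prefix, local_path))
```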


Configuring BODS with the Cloud Services

We need to create a File Location in SAP BODS that points to AWS S3. Log in to the Designer and navigate to Formats in the Local Object Library.


In the File Locations context menu, select New to create a new File Location.


Create the File Location by selecting Amazon S3 Cloud Storage as the protocol. Fill in the access key and secret key in the security details, select the region, provide the name of the bucket from which the data has to be fetched, and configure the other necessary parameters.


Different configurations can be set for your Dev, Quality, and Production environments. Azure and Google Cloud can be configured in a similar manner.
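The idea of environment-specific configurations can be sketched as a simple mapping. The bucket and region values here are made up for illustration:

```python
# Illustrative only: one set of connection values per environment,
# mirroring the Dev/Quality/Production configurations of a File Location.
FILE_LOCATION_CONFIGS = {
    "DEV":  {"bucket": "bods-dev-bucket",  "region": "us-east-1"},
    "QA":   {"bucket": "bods-qa-bucket",   "region": "us-east-1"},
    "PROD": {"bucket": "bods-prod-bucket", "region": "eu-west-1"},
}

def config_for(environment: str) -> dict:
    """Return the connection values for the environment a job runs in."""
    return FILE_LOCATION_CONFIGS[environment.upper()]
```

Keeping only the environment name in the job, and resolving the rest at run time, is what lets the same job move unchanged from Dev to Production.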

Create a new Flat File or Excel file format, depending on the data source, and enter the format of the file.


Drag and drop the file format into a Data Flow; you can then use that object to perform transformations and other operations.

Azure and Google Cloud services can be configured using the method described above, and BODS can then be used to process files across these platforms or to combine files from them and process the result.


    Hi Shankar,

    I tried to create a File Location for Amazon S3 in Data Services, but could not succeed.

    I am not really sure what needs to be passed for File System (remote directory and bucket).

    Can you please guide me on this?


      • Hi,

        I have a requirement to connect BODS with AWS. I have successfully uploaded a file into an Amazon S3 bucket, but the requirement is to upload the file into a subfolder under the bucket.

        I checked the S3 product notes, and they mention that S3 has a flat structure under a bucket. I found that other applications have similar issues with subfolders and use the "key name" concept to refer to a file object. I am not sure how BODS can handle this.

        I have tried to specify the subfolder name using different options, both in the File Location connection and in the file properties, but without success.

        Would appreciate any input on the same
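The "key name" concept mentioned above can be sketched in plain Python. S3 has no real folders; a "subfolder" is just a prefix inside a flat object key, and listing a folder means filtering keys by that prefix (the key values below are illustrative):

```python
def build_key(*parts: str) -> str:
    """Join 'folder' parts and a file name into one flat S3 object key."""
    return "/".join(p.strip("/") for p in parts if p)

def keys_under(keys, prefix: str):
    """Emulate listing a 'subfolder': S3 simply filters keys by prefix."""
    prefix = prefix.rstrip("/") + "/"
    return [k for k in keys if k.startswith(prefix)]

# The bucket stores flat keys; the slashes only look like folders.
bucket_keys = ["raw/2024/sales.csv", "raw/2024/costs.csv", "curated/sales.parquet"]
print(build_key("raw", "2024", "sales.csv"))  # raw/2024/sales.csv
print(keys_under(bucket_keys, "raw/2024"))
```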




        • Hello,

          BODS version: 4.2 SP7

          I am facing the same issue. I tried several options and couldn’t solve it.

          I am trying to upload a file to Amazon S3 through BODS. I keep getting the error 'AWS S3 bucket <awxyz-sdsa> does not exist'. I have access only to subfolders under the S3 bucket.

          I tried from S3 browser and I am able to connect to the subfolders under S3 bucket.

          Any help is greatly appreciated!

          Thank you.


  • Hi Shankar,


    Is it possible to read the files from Azure Blob storage? If yes, could you please provide the steps.


    I have created a file location of Blob storage, but not sure how to read the file from that location.

      • Hi Dirk,

        I am actually facing an issue: I set up a File Location to Azure Data Lake, but I am struggling to find a way to extract data back out of that same folder in ADLS.


        So if I create a new Flat File that is in ADLS and update the directory to point to that location, do I then need to recreate all the fields in the Flat File editor? Is it possible to pull the field names and data types from the file that was written to ADLS? The reason I ask is that we might have files we want to extract from ADLS using BODS without knowing all the fields the file contains.
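Outside BODS, one way to discover the field names of a landed file is to read its header row. A minimal sketch with the standard csv module — the sample content and column names are stand-ins, not from any real file:

```python
import csv
import io

def infer_fields(text: str, delimiter: str = ",") -> list:
    """Return the column names from a delimited file's header row."""
    reader = csv.reader(io.StringIO(text), delimiter=delimiter)
    return next(reader)

sample = "customer_id,order_date,amount\n1,2024-01-05,99.90\n"
print(infer_fields(sample))  # ['customer_id', 'order_date', 'amount']
```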

  • Hi Shankar,

    have you been able to export such an AWS S3 Cloud Storage from Data Services?

    When I do an export - with an export password - I get empty nodes / items in the XML / ATL file.

    Importing it into another repo destroys the storage.

    In the ATL / XML file it looks like this:



    I also cannot find a way to set fl_s3_accesskey / fl_s3_secretkey with an executable from Data Services.

    So automating the execution of a job will fail, and I have to create such stores manually.


    Bit of a background:
    I export Job as a XML file and integrate it in an overall ETL-process code

    I create a Jenkins Pipeline which takes the XML and imports it into a repo

    The pipeline imports Systemconfiguration, Datastores, Flatfiles, etc ...

    The Pipeline executes the job after import

    The pipeline includes also other Deployment Items which are related to this DS-Job


    Everything works fine except the AWS S3 storages. I cannot create an XML/ATL export and reimport it (including the password) without destroying it. Creating those stores manually is not a very generic approach; I would like to automate this.


    Would be great if you have a hint (something like al_engine.exe --setSecretOfAWS_accesskey=theKey )





    • Hi Mansur


      See if you can call another script after exporting that can push to S3 and fetch from S3. You can use s3cmd, with the access key / secret key set globally in its config, and run that command in your Jenkins flow after the sequence.


      This way you would be able to use the S3 part.


      Happy Automation!



  • Thank you, I have a question please:

    is it possible to put the bucket name as a parameter? I tried to connect to AWS without setting the bucket as a parameter and got no errors … but when I tried to set it as a parameter, I got errors in the connection

    Thank you again


    • Just wanted to inform you that it is possible to set the bucket name as a parameter. My question now: is it possible to set the remote directory to a global variable and change the value of the global variable at run time? @Shankar Narayanan SGS