By Former Member

Using Open Hub Destination and APD in a Process Chain

Although BW is a data warehousing/reporting system, specific user requirements sometimes force it to act like a relational database or sourcing interface and provide a dump of a huge volume of data from a query or a cube.

We had one such requirement from a sourcing team. The team wanted to analyze and harmonize at least one year of procurement data (about 2 million records) in a local spreadsheet and sell it to other business units. The file layout of the data is at a granular, detailed level.

Business users continued to execute the “Procurement detail” BEx query and download the result from the BEx web output or as a workbook. But this took far too long, and sometimes the query even failed with a “Time out” error.

The users were quite frustrated and wanted to get rid of this wait time. They were even willing to spend more money on better hardware or additional software to resolve the problem.

The problem is simple but very frustrating for users. There are several ways to resolve it, for example information broadcasting via a precalculation server. Here I will explain one of the options.

BW provides two components, the Open Hub Destination (OHD) and the Analysis Process Designer (APD), which can export the output of a query or an InfoProvider in a background/batch process. We can include the OHD or APD in a process chain and schedule it on a weekly or monthly basis, depending on the business users’ needs.

To use the Open Hub Destination or the APD in a process chain, you have to write the data to the application server.

But to write the data to the application server, we first have to create a logical file path, assign a physical path to it, and define a logical file name.

The steps below show how this is achieved.

  1. Go to the logical file path definition screen (transaction FILE), choose “New Entries”, enter the logical file path “ZEPM_SBU” and a name, then save. Select this row and click “Assignment of Physical Paths to Logical Path”. (Screenshot: LF1.JPG)
  2. In the “Assignment of Physical Paths to Logical Path” view, click “New Entries”. (Screenshot: LF2.JPG)
  3. Fill in the entries on the “New Entries” screen. (Screenshot: LF3.JPG)
  4. Select the syntax group matching the operating system where the file will be stored, then enter the physical path. Note that the actual file name appears here only as the placeholder <FILENAME>, which is filled in from the logical file name definition. Click “Save”. (Screenshot: LF4.JPG)
  5. Now select “Logical File Name Definition” and click “New Entries”. (Screenshot: LF5.JPG)
  6. Fill in the details on the screen:


7.     The details are filled in as shown above. Note that I selected the data format “Spreadsheet format” and entered the logical path “ZEPM_SBU” that was defined earlier. (Before a logical file can be defined, the corresponding logical path must already exist.) Once everything is filled in, click “Save”.
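Once the logical file name is saved, it can be sanity-checked from ABAP before wiring it into the OHD. Below is a minimal sketch using the standard function module FILE_GET_NAME, which resolves a logical file name into its physical path (the name “ZEPM_S” is the one used in this example; adjust to your own):

```abap
* Resolve the logical file name to the physical file on the
* application server, to verify the FILE customizing is correct.
DATA lv_file TYPE filename-fileextern.

CALL FUNCTION 'FILE_GET_NAME'
  EXPORTING
    logical_filename = 'ZEPM_S'
  IMPORTING
    file_name        = lv_file
  EXCEPTIONS
    file_not_found   = 1
    OTHERS           = 2.

IF sy-subrc = 0.
  WRITE: / 'Resolved physical file:', lv_file.
ELSE.
  WRITE: / 'Logical file ZEPM_S not found - check transaction FILE.'.
ENDIF.
```

If the resolved path is wrong, revisit the physical path assignment (step 4) and the logical path entered in the logical file name definition.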


8.     This step shows how the newly created logical file “ZEPM_S” is used in the Open Hub Destination or in the APD; the example below uses an OHD. Note that the server name you see is the server where your BW system is installed, and the application server file name “ZEPM_S” is the logical file name that we created.

        Include the DTP associated with this OHD in the process chain.

        In this way, the data can be written to a file on the application server in the background via a process chain.
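For context, writing a file to the application server ultimately comes down to the ABAP OPEN DATASET statement against the resolved physical path. The OHD/DTP handles this internally, so the following is only an illustrative sketch of the underlying mechanism (the sample line content is made up):

```abap
* Illustrative only: the OHD/DTP performs this write itself.
* Resolve the logical file, then write a line to the server file.
DATA: lv_file TYPE filename-fileextern,
      lv_line TYPE string.

CALL FUNCTION 'FILE_GET_NAME'
  EXPORTING
    logical_filename = 'ZEPM_S'
  IMPORTING
    file_name        = lv_file
  EXCEPTIONS
    file_not_found   = 1
    OTHERS           = 2.

IF sy-subrc = 0.
  OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
  lv_line = 'PLANT;MATERIAL;QUANTITY'.   " example header row
  TRANSFER lv_line TO lv_file.
  CLOSE DATASET lv_file.
ENDIF.
```

Because the file lands on the application server rather than a user’s PC, the export can run unattended in a batch job, which is exactly what the process chain scheduling above relies on.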




      1 Comment
      Gilberto Jose Hernandez Mendoza

      Hi, I’m facing this issue: the extraction path contains a white space, for example

      \\servername-srv-02\CustApps\DataLake DropZone\ReportingDrop-DEV\GSC\Global\SKU\GLB\<FILENAME>

      At runtime the path gets trimmed,


      removing the space in "DataLake DropZone" and resulting in a non-existing path error.

      How can this trimming be avoided?


      Thanks a lot