pravin_datar
Employee

Business Planning and Consolidation version for Netweaver extensively uses process chains for running the BPC processes. These process chains are automatically invoked by the BPC application when the BPC user executes processes from the front end. Should these process chains be executed exclusively within BPC, or should we be able to execute them outside BPC using native Netweaver BW, perhaps from within any custom process chains that we may create? Is there any need to do so? And finally, is there any way to do that? Let us try to answer these questions in this blog. Let us begin by seeing whether we have any business reason to run the BPC process chains outside the BPC application. In order to do that, we need to understand how the optimization process works in BPC version for Netweaver.

Optimizing the BPC data model:

A dimension in BPC is equivalent to a characteristic in Netweaver BW, and dimension members in BPC are equivalent to characteristic values in Netweaver BW. Taking this further, when a user creates a dimension in BPC version for Netweaver, a corresponding Netweaver BW characteristic is generated in the BPC namespace. When a user creates a dimension member for that dimension in BPC version for Netweaver, a characteristic value is generated in Netweaver BW in the master data of the characteristic corresponding to that BPC dimension. When a user creates a BPC application in BPC version for Netweaver by selecting a few of the BPC dimensions, an infocube (as well as a multiprovider containing that infocube) is generated in the BPC namespace that includes all the characteristics corresponding to the selected BPC dimensions. (You can read more about the BPC namespace at A reservation of a different kind – why, what and how of BPC namespace.)

We should distinguish the BPC dimension from the Netweaver BW dimension. In Netweaver BW, the term dimension is used to group characteristics. How are the characteristics in a BPC infocube organized among the Netweaver BW dimensions within the generated BPC infocube? Well, it depends upon the number of dimensions included in the BPC application. If the number of BPC dimensions in the BPC application is 13 or fewer, then all of them are automatically modeled as line item dimensions in the BPC infocube. This is because Netweaver BW allows up to 13 user-defined dimensions in an infocube. If the number of BPC dimensions exceeds 13, then the BPC infocube data model is generated automatically for those BPC dimensions, distributing the characteristics among the 13 available Netweaver BW dimensions. The data model thus generated when the cube is created may not remain the most optimized one as the fact table of the cube begins to grow. BPC version for Netweaver gives the BPC user the option to optimize the data model from the front end. As shown below, there are two options to optimize - Lite optimize and Full optimize.

The Lite Optimize option does not make any changes to the data model. It just closes the open request, compresses and indexes the cube, and updates database statistics. The Full Optimize option is the one that may rearrange the characteristics among the 13 user-defined Netweaver BW dimensions. The Full Optimize process checks whether the size of each dimension table is less than 20% of the fact table and creates as many line item dimensions as possible. In order to do this reconfiguration, it takes the appset offline, creates a shadow cube with an optimal data model, links the new optimal cube to the multiprovider for the application, moves data to the shadow cube, deletes the original cube, closes the open request, compresses and indexes the cube, updates database statistics, and brings the appset online again. Though this results in a new infocube, the multiprovider remains the same, and all the BPC reports are built on the multiprovider and not the underlying infocube. Hence this optimization does not affect the BPC reports reporting this data.

Using ETL for BPC infocubes:

Since the data that the BPC user enters from the BPC front end is stored in the underlying real-time infocube for that application, one may ask whether it is possible to load data to that cube with a normal Netweaver BW ETL process. The answer to that is 'yes' - but with a caveat.

We can use Netweaver BW ETL for the BPC infocubes. Here is an example of a DTP to load data from a flat file to a BPC infocube.

 

Now, if the BPC user chooses to do a 'Full Optimize' for this application, it may result in a new infocube with a more optimal data model. Though that new infocube gets automatically linked to the multiprovider for the BPC application, at present it does not inherit the ETL structures that were built on the original cube. So in the above example, if the BPC user executes a 'Full Optimize' for the Finance application, the new optimal infocube for the Finance application may not inherit the DTP created on the original /CPMB/DZID30P infocube. The source system, data source, infosource etc. will remain, but the transformation that links these to the infocube will get deleted and has to be recreated. If this optimization happens in the production system, then the transformation may have to be recreated and transported up the landscape.

A way to obviate such a situation is to execute the process chains used by BPC to load data using native Netweaver BW tools, outside the BPC application. In the above example, a flat file is being loaded to the BPC infocube using Netweaver BW ETL tools. However, the BPC application itself offers front-end functionality, the Data Manager, to load data either from a flat file or from any other Infoprovider. The Data Manager uses BPC process chains in the background to load the data, as shown below.

If we can run these process chains outside BPC - from the EDW layer using native Netweaver BW - then not only can we integrate this with custom process chains, but we also obviate the issue of ETL structures getting deleted on 'Full Optimize'. Running BPC process chains outside BPC is also important if we are using open hub and want to automate the flat file load to BPC cubes by creating a user-defined process chain that integrates the file creation of the open hub and the loading of that file to the BPC cube. If, by any means, our user-defined (custom) process chain (that we create in transaction 'rspc') can run the BPC process chain to load the data to the BPC cube, then we have an 'industrial strength' solution for loading data to BPC infocubes using the Netweaver toolset. The question now becomes how to accomplish this. Let us try to understand the steps involved.

Steps in using BPC process chain within non-BPC process chain:

The first step is to upload the flat file. If we want to use open hub, then the open hub can place the file at any specified location on the BPC application server, or we can upload the flat file to the BPC File service (transaction 'ujfs') as shown below.
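For orientation, the BPC file service typically organizes Data Manager files under the appset and application. A minimal sketch of where the uploaded flat file (and, later, the transformation file) might sit - the appset SALES, the application FINANCE, and the file names are hypothetical, and the exact folder layout can vary by release:

      \ROOT\WEBFOLDERS\SALES\FINANCE\DATAMANAGER\DATAFILES\TESTDATA.CSV
      \ROOT\WEBFOLDERS\SALES\FINANCE\DATAMANAGER\TRANSFORMATIONFILES\IMPORT.XLS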

The second step is to create a transformation file using the BPC front end. Though we want to run the BPC process chain with native Netweaver tools, this is the only step that we have to do with the BPC front end. This is because the BPC process chain looks for the XML version of the transformation file. When we process the transformation file from the BPC front end, this XML version of the transformation file is automatically created and stored in the file service.
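For reference, a simple transformation file for a flat-file load typically consists of *OPTIONS, *MAPPING and *CONVERSION sections. A minimal sketch, assuming a comma-delimited file with a header row - the dimension names and column positions are hypothetical and depend on the application:

      *OPTIONS
      FORMAT = DELIMITED
      HEADER = YES
      DELIMITER = ,

      *MAPPING
      CATEGORY = *COL(1)
      TIME = *COL(2)
      ENTITY = *COL(3)
      ACCOUNT = *COL(4)
      SIGNEDDATA = *COL(5)

      *CONVERSION

It is this file that, once processed from the BPC front end, produces the XML version that the BPC process chain reads from the file service.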

The third step is to create an answer prompt file that passes the required parameters to the BPC process chain. This file should be a tab-delimited file. The format of the answer prompt file is as follows:

      %FILE%               'csv file path in file service'
      %TRANSFORMATION%     'transformation file path in file service'
      %CLEARDATA%          1/0
      %RUNLOGIC%           1/0
      %CHECKLCK%           1/0

Here is an illustrative example of an answer prompt file (values separated by tabs; the appset SALES, the application FINANCE, and the file names are hypothetical):
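      %FILE%              \ROOT\WEBFOLDERS\SALES\FINANCE\DATAMANAGER\DATAFILES\TESTDATA.CSV
      %TRANSFORMATION%    \ROOT\WEBFOLDERS\SALES\FINANCE\DATAMANAGER\TRANSFORMATIONFILES\IMPORT.XLS
      %CLEARDATA%         0
      %RUNLOGIC%          1
      %CHECKLCK%          1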

The fourth step is to run program ujd_test_package with the right Appset and Application. We should use the answer prompt file created in the above step and save the variant for the program as shown below.

However, please note that this ujd_test_package program was originally designed to assist in debugging the data manager packages. Hence it may not be a bad idea to copy this program to a user-defined program and use that in the next step - just to be on the safer side, so that if future development changes the nature of this program, we do not get unnecessary surprises!
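For illustration, instead of a full copy, a thin wrapper report could also be used so that the process chain in the next step references a Z object rather than the standard report directly. A minimal sketch, assuming a hypothetical report name ZBPC_RUN_DM_PACKAGE and a hypothetical saved variant ZBPC_FLATFILE_LOAD:

      REPORT zbpc_run_dm_package.
      " Thin wrapper so that custom process chains call a user-defined program
      " instead of the standard debugging report directly.
      " ZBPC_FLATFILE_LOAD is the variant saved in the previous step; it holds
      " the appset, the application and the answer prompt file.
      SUBMIT ujd_test_package
        USING SELECTION-SET 'ZBPC_FLATFILE_LOAD'
        AND RETURN.

Note that such a wrapper still depends on the standard report at runtime; a full copy, as suggested above, insulates the custom process chain more completely from future changes to ujd_test_package.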

Now, in the final step, we are ready to create our custom process chain that executes the BPC process chain. As shown below, create a user-defined process chain in transaction 'rspc' and include a process type to execute an ABAP program. Include the ujd_test_package program (or the user-defined program created based on ujd_test_package) with the saved variant.

Activate the process chain and execute it.

Thus we can run the BPC process chain from within non-BPC process chains. These steps will work not only for the process chain that loads a flat file into a BPC infocube (for example, in combination with open hub), but also for loading data from another Infoprovider to a BPC infocube (using the BPC process chain that loads data from an Infoprovider).
