
Semantically Partitioned Objects (SPO) in SAP BW

Business Scenario:

An organization operates its business across many continents and regions. When it comes to data analysis and decision making based on BW data, the organization may face several problems because of the large geographical spread and the different time zones:

  1. The data volume would be very large. Retrieving even a small set of records would take significantly more time.
  2. Data loading is usually done during non-business hours. Because of the different time zones, some downtime would be required: the non-business hours of one region can be business hours of another.
  3. If an error occurs in the data of a particular region, it affects the global business. The request turns red, and users from the other regions would also be unable to do their analysis.


To overcome such hurdles, SAP has provided an InfoProvider called the Semantically Partitioned Object (SPO). It is an InfoProvider that consists of several InfoCubes or DataStore Objects with the same structure. We decide to make an InfoProvider semantically partitioned at the time of defining it.

Note: This feature is available in BI 7.3 and higher versions.
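To make the concept concrete before the click-through steps, here is a minimal Python sketch of the idea: one logical object whose data physically lives in several identically structured parts, with each record routed by a partitioning characteristic. This is purely illustrative; the class and field names are made up and have nothing to do with any SAP API.

```python
# Illustrative model of a semantically partitioned provider:
# one logical object, several identically structured physical parts.

class SemanticallyPartitionedProvider:
    def __init__(self, name, structure, partition_char):
        self.name = name                      # e.g. "ZGE_SL"
        self.structure = structure            # field list shared by all parts
        self.partition_char = partition_char  # characteristic used for partitioning
        self.partitions = {}                  # partition value -> list of records

    def add_partition(self, value):
        # each partition behaves like its own InfoCube with the same structure
        self.partitions[value] = []

    def load(self, record):
        # a record is routed to exactly one partition, based on the
        # value of the partitioning characteristic
        value = record[self.partition_char]
        if value not in self.partitions:
            raise ValueError(f"No partition defined for {value!r}")
        self.partitions[value].append(record)
```

Reporting still sees one logical provider, while loads and errors are isolated per partition, which is what the steps below set up in the BW workbench.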

Creating a Semantically Partitioned Object:

1. Right-click the InfoArea → Select Create InfoCube.


2. The following screen is displayed. Enter the name of the InfoCube, tick the 'Semantically Partitioned' checkbox, and click the Create button.


3. Enter the characteristic InfoObjects for this InfoProvider.


4. Enter the key figures.


5. Click Maintain Partition as shown below:


6. We get the following screen:


7. Select the characteristics on which the partitions are to be based.


Here we are creating partitions based on geography.

8. Click Partition to create the required number of partitions. Enter the geography names as required.


9. Here we have created 4 partitions as shown:


10. Now activate the InfoProvider as shown below:


11. The InfoProvider is activated. Now create the transformation as shown below:


12. Enter the details for the source.


13. The transformation is created. Activate the transformation. Now create the DTP as shown below:


14. Click Folder to create a new template folder as shown below:


15. Enter the name of the template folder:


16. Right-click the created template (ZGEO_SL) → Click Create New DTP Template:


17. Enter DTP Template name.


18. The DTP settings screen is displayed:


19. Click Save. The following screen is displayed.

      Select the DTP template and the DataSource connected to the first partition. Click Assign.


20. The following screen is displayed. Select the 'Not Yet Generated' DTP and click Generate.


21. Repeat steps 19 and 20 to create the DTPs for all the partitions. All the DTPs are created as shown below:


22. Click Create Process Chains to create the process chains for all the partitions:


23. Select the first DTP and click Add as shown below:


24. Follow the same procedure to add all the remaining DTPs to the process chain.

25. Once all the DTPs are added, click Generate as shown below:


26. After successful generation of the process chain, we see the following screen:


27.  Click Process Chain Maintenance to display the generated process chain:


28.  We get the following process chain created:


Now run this process chain and check the data in each InfoCube:

Data in partition 1:


Data in partition 2:


Data in partition 3:


Data in partition 4:


Check the number of records in PSA.


We find that there are 10 records in the PSA, but depending on the geographical partition, only the relevant data is loaded into each InfoCube.
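The routing behaviour described above can be sketched in a few lines of Python. This is a pure illustration, not SAP code: the region names and record fields are made up, not the actual data from the screenshots. Each generated DTP effectively applies a filter on the partitioning characteristic, so every cube receives only its own region's records.

```python
# 10 "PSA" records; each carries a geography value.
psa_records = [
    {"region": "ASIA", "amount": 100}, {"region": "ASIA", "amount": 150},
    {"region": "EUROPE", "amount": 200}, {"region": "EUROPE", "amount": 250},
    {"region": "EUROPE", "amount": 300}, {"region": "AMERICA", "amount": 400},
    {"region": "AMERICA", "amount": 450}, {"region": "AFRICA", "amount": 500},
    {"region": "AFRICA", "amount": 550}, {"region": "AFRICA", "amount": 600},
]

# One "InfoCube" per partition value, all with the same structure.
partitions = {"ASIA": [], "EUROPE": [], "AMERICA": [], "AFRICA": []}

# Route each record to the cube matching its partition value,
# mimicking the per-partition DTP filters generated by the SPO.
for record in psa_records:
    partitions[record["region"]].append(record)

for region, records in partitions.items():
    print(region, len(records))
```

All 10 source records are still accounted for, but each cube holds only its own slice, which is why a red request in one region leaves the other regions reportable.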


Benefits:

1. Report execution time decreases tremendously.

2. If an error occurs in one partition, the other partitions are not affected and remain available for reporting.

3. No downtime is required for data loads; data in each partition can be loaded as required.

SPOs can also be used to create multiple InfoCubes based on 'CALYEAR'. Many projects need this scenario, where data has to be analyzed year-wise.
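The same routing idea carries over when partitioning on 'CALYEAR' instead of geography: a hypothetical sketch with made-up years, again illustrative only.

```python
# Sample records carrying a calendar-year characteristic.
records = [
    {"calyear": 2011, "amount": 10},
    {"calyear": 2012, "amount": 20},
    {"calyear": 2012, "amount": 30},
    {"calyear": 2013, "amount": 40},
]

# One partition per calendar year; records are routed by CALYEAR,
# so year-wise analysis only ever touches one partition's cube.
partitions = {}
for rec in records:
    partitions.setdefault(rec["calyear"], []).append(rec)

print(sorted(partitions))    # the distinct years become the partitions
```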

    • Hi Ganesh.

      Thank you so much.

      Yes, it contains lots of screenshots. It covers so many tasks (from InfoProvider creation to process chain creation) that I could not keep it shorter 😉 .

      Thanks a lot 🙂 .



    • Hi Elwanda,

      Really happy to see that you liked the document and its illustrations.

      Thank you so much for rating this document 🙂 .



  • Hi Nitesh,

    Very nicely documented the whole process .

    Just one question, as you mentioned in last line before benefits

    "We find that there are 10 records in PSA but depending on the geographical partition, only the relevant data is being loaded in each InfoCube."

    Regarding "in each InfoCube": are we using multiple cubes? I believe there is a single cube in which there are different partitions. Please correct me if I am wrong!

    Again wonderful document !



    • Hi Yogesh,

      First of all, thank you so much for reading this document so carefully.

      Yes, we have created a single SPO InfoCube. When we create SPO, it is a single object. In this example, ZGE_SL is the SPO InfoCube.

      When we create the partitions, the system automatically creates the partitioned InfoCubes depending on the number of partitions.

      Since we have created 4 partitions, we can see the 4 InfoCubes ZGE_SL01, ZGE_SL02, ZGE_SL03 and ZGE_SL04 which have been automatically created by the system.

      Please find below the screen shot:


      Above we can see the design showing all the partitioned InfoCubes and the related DTPs, which were created automatically at the time of partitioning and process chain creation.

      Hope it clarifies your queries.

      Thanks a lot again 🙂 .



  • Hi Nitesh,

    Very informative and good presentation. I'll definitely keep it handy and implement this when I get the next chance.

    Thanks for sharing and appreciate your time and knowledge!



    • Dear Umashankar,

      Thank you so much for your kind and motivating words.

      A very big thanks for rating this document 🙂 .



  • Hi, very nice document, thank you.

    How is this approach different from creating multiple InfoCubes based on 'CALYEAR'? Which one is better?

    • Hi Erdem,

      This is a new feature available from BI 7.3 and above, whereas InfoCubes based on CALYEAR can be created in any version.



  • Hi Nitesh,

    I am very happy to learn about this SPO.

    I have a doubt: how can we create an SPO (cube) on top of a DSO? If it is possible, please share a document. Please reply.



  • Hello Nitesh

    Thank you for the wonderful document with detailed approach.

    One question to you on this. We do have a requirement to implement the semantic partitioning on top of a standard DSO.

    So my approach would be an InfoSource with semantic partitioning of cubes on top of the DSO. Then, finally, can we use the SP cube as a feeder to a BPC cube?

    Comments / suggestions please!!!




  • Hello Experts,

    Let me add one more line to my above post. I would like to add a BPC staging cube on top of the semantic cube so that the BPC staging data will be fed to the BPC model.

    Thank you