Technology Blogs by Members
Applies to: SAP BW 7.3

Summary: This paper gives a conceptual overview of the Semantically Partitioned Object (SPO) in SAP BW 7.3. Through this document we have tried to explain the benefits of SPOs and the steps involved in modeling them.

Created By: Revathi Bonda

Created on:  5th Jan 2015

Author Bio: Revathi Bonda is currently working as SAP BW consultant.


Background

In SAP BW 7.3, the SPO is essentially the classic, manually modeled semantic partitioning turned into a formal object, with the difference that it does not allow you to handle exception cases separately in the loading process: the transformation logic must be the same for all partitions. SPOs are also quite flexible, because you are not restricted to using the whole object for further data loading or for analysis. For example, you can create a MultiProvider on top of only some of the partitions, or load data into an InfoCube from just one particular partition of the SPO. SAP has clearly done a great deal here to ensure a well-performing data warehousing system; all that remains for us is to use SPOs in implementations and deliver fast BI solutions that require minimal administration effort.


Purpose of Semantic Partitioning Object

Performance of both data loading and subsequent reporting is becoming more and more crucial. BW consultants therefore always pay attention to performance factors: a proper data flow model, good multi-dimensional modeling of the data mart layer, parallelism in extraction and loading, and so on. The semantically partitioned object supports all of these by splitting one logical InfoProvider into several smaller physical partitions.


Advantages of Semantic Partitioning Object

1) Performance improves for large data volumes that lead to long runtimes with standard DataStore objects and InfoCubes. Semantic partitioning is modeled so that the data set is distributed over several data containers. This means that runtimes are kept short even if the total data volume is large.

2) Error handling is better. With a standard InfoProvider, if a request for one region ends with an error, for example, the entire InfoProvider is unavailable for analysis and reporting. With a semantically partitioned object, the separation of the regions into different partitions means that only the partition belonging to the region that caused the error is unavailable for data analysis.

3) EDW scenarios usually involve several time zones. With a semantically partitioned object, the time zones can be separated by the partitions. Data loading and administrative tasks can therefore be scheduled independently of the time zone.

4) You can use the semantically partitioned object for reporting and analysis, just as you would any other InfoProvider. You can also choose to update only selected partitions to an InfoCube.
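The distribution described in point 1 can be sketched conceptually. The following Python snippet is an illustration only, not BW code: the record layout and the routing function are invented for the sketch.

```python
# Illustrative sketch (not a BW API): distribute records into semantic
# partitions keyed by a characteristic value, here REGION.
from collections import defaultdict

def route_to_partitions(records, characteristic):
    """Route each record to the container for its characteristic value."""
    partitions = defaultdict(list)
    for rec in records:
        partitions[rec[characteristic]].append(rec)
    return partitions

records = [
    {"REGION": "EMEA", "AMOUNT": 100},
    {"REGION": "APAC", "AMOUNT": 250},
    {"REGION": "EMEA", "AMOUNT": 75},
]
parts = route_to_partitions(records, "REGION")
# Each container now holds only its own region's data, so loads and
# activations operate on small containers that can run in parallel.
```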


Scenario

The requirement is to partition a large volume of InfoProvider data based on characteristics, in order to improve data-loading and reporting performance and to keep the latest data available to end users: even if a load failure occurs in one partition, the other partitions continue to serve reporting.


Key Points

  1. Delta update is possible with a semantically partitioned object that is made up of InfoCubes or DataStore objects.
  2. If the SPO is made up of InfoCubes, deltas can be updated to the target InfoProvider using DTPs without any restrictions; the reverse direction is not possible.
  3. If the source is a semantically partitioned object that is made up of DataStore objects, only full DTPs can be created. If you want to update using deltas, you have to select the partitions of the semantically partitioned object as the source, rather than the semantically partitioned object itself.
  4. SPOs can be included in a MultiProvider, and BEx reporting can be done on top of it.
  5. If you update the entire semantically partitioned object to another InfoProvider, the navigation attributes cannot be used in the analysis. This is because the InfoSource that compiles the individual partitions for the update does not support navigation attributes. If you only update some of the partitions, this restriction does not apply.
  6. Semantic partitioning can be applied to already existing InfoProviders (DSO/InfoCube). Before partitioning, we need to ensure the data is dropped.
  7. If the target is a semantically partitioned object, you can create the DTPs using the target semantically partitioned object's wizard. The source of the DTPs would then have to be the outbound InfoSource of the source semantically partitioned object, rather than the semantically partitioned object itself.
  8. Due to partition pruning, the data is processed quickly, with or without the use of a BW Accelerator. With partition pruning, only those partitions that contain the data requested by the query are read.
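Point 8 (partition pruning) can be illustrated with a small conceptual sketch. This is not BW code; the partition metadata and ranges below are hypothetical, chosen only to show how a query filter eliminates partitions from the read.

```python
# Conceptual sketch of partition pruning: only partitions whose value
# range overlaps the query filter are read at all.
partitions = {
    "PART_01": ("200501", "200504"),  # (low CALMONTH, high CALMONTH)
    "PART_02": ("200505", "200508"),
    "PART_03": ("200509", "200512"),
}

def prune(partitions, low, high):
    """Return only the partitions whose range overlaps [low, high]."""
    return [name for name, (lo, hi) in partitions.items()
            if not (hi < low or lo > high)]

# A query restricted to 2005-06..2005-07 touches a single partition:
print(prune(partitions, "200506", "200507"))  # ['PART_02']
```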


Modeling

Implementation: to create the new SPO InfoProvider, follow the steps below.


Step 1: Create an InfoCube under the required InfoArea and enable the 'Semantically Partitioned' option. Alternatively, as demonstrated here, enable the 'Semantically Partitioned' option for an existing cube (drop its data before enabling the option), as shown below.

Step2:

Maintain the semantic object: click on 'Change Object' to add any dimensions and include the necessary InfoObjects, as below.

Step 3: Maintain partitions based on the requirements of the business users, using a time characteristic or another characteristic. Here we use the calendar month (CALMONTH) to partition the cube's data covering 2005 to 2014.

Click on 'Maintain Partitions', choose the characteristic for partitioning, move it to the right pane as below, and continue.

The first partition is created by default, as shown below. Specify the partition criteria.

To enter a range, uncheck the 'Single Value' checkbox.

Click on 'Multiple Partitions'. In the screen that appears, select the InfoObject on which the partitions are to be based, enter the calendar months from and to which partitioning is needed, specify the number of months per partition, and click Continue.

The partitions are created, from partition 1 to partition 31.
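As a plausibility check on the partition count: assuming four calendar months per partition (an assumption, since the exact step size is not stated in the text), the range 2005-01 to 2014-12 yields 30 ranges, which together with the default partition created earlier gives the 31 partitions. A small sketch of the range arithmetic, in Python purely as illustration:

```python
# Sketch of the CALMONTH range arithmetic behind the 'Multiple
# Partitions' dialog. Four months per partition is an assumption chosen
# to reproduce the 31 partitions above (30 ranges + default partition).
def calmonth_ranges(start, end, months_per_part):
    """Yield (low, high) CALMONTH pairs, e.g. ('200501', '200504')."""
    def to_idx(ym):  # YYYYMM -> absolute month index
        return (ym // 100) * 12 + (ym % 100) - 1
    def to_ym(n):    # absolute month index -> YYYYMM
        return (n // 12) * 100 + (n % 12) + 1
    lo, last = to_idx(start), to_idx(end)
    while lo <= last:
        hi = min(lo + months_per_part - 1, last)
        yield str(to_ym(lo)), str(to_ym(hi))
        lo = hi + 1

ranges = list(calmonth_ranges(200501, 201412, 4))
print(len(ranges))  # 30 ranges; plus the default partition = 31
```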

Step4:

Now start activation and the following logs are generated.

Step 5: Under 'Further Options', create the transformation. The InfoSource name is system-defined; we only need to enter the DataSource name.

The following rules are generated and mapped.

Step6:

Create the DTPs: create a folder in the DTP templates for the cube, and create a new DTP template.

Now assign DTPs to all the partition DataSources by selecting them all at once.

The DTPs are generated by clicking the 'Generate' button.

Step 7: Create a process chain.

Click on 'Add DTPs to Process Chain', selecting all the DTPs at once.

Click on 'Generate' to activate the process chain. The process chain is created with all the DTPs of the SPO.

Now open the process chain, maintain the variant, and schedule it. The order of the DTPs in the process chain can be changed as well.

The process chain is activated.

Step 8:

The semantically partitioned cube flow is created as below, with all 31 partitions.

Repartitioning SPO

During repartitioning, the partitioning of an existing semantically partitioned object is changed. Partitioning comprises properties such as the number of partitions, the logical display order of the partitions, and the criteria and texts for the individual partitions. Repartitioning also includes creating new partitions.

Repartitioning if the object does not contain any data

If a semantically partitioned object does not contain any data, we can repartition it any way we like. We can simply change, save and activate the partition criteria.

Repartitioning if the object contains data

If a semantically partitioned object contains data, we cannot repartition it any way we like. Repartitioning is only possible if no data needs to be moved to different partitions and if no partial deletion of partition data is required. If these conditions are met, we can change the partition criteria of a semantically partitioned object and activate it. The system then automatically deletes the data that no longer matches the partition criteria.

We can thus use the semantically partitioned object for a rolling window scenario: to do this, change the partition criteria and the text for the oldest partition. When we activate the object, the system deletes the data from this partition, thus making it possible to load new data.

Automated Repartitioning

We can also automate repartitioning, so that the partitioning properties no longer have to be changed manually. The Business Add-In (BAdI) RSLPO_BADI_PARTITIONING helps us repartition semantically partitioned objects. In this BAdI, we can implement the following properties: the number of partitions and their order, the partition criteria, the texts in various languages, and the DTPs to be generated for new partitions.

For the implementation of the methods in the BAdI interface, we have the following options: we can enter the partition criteria directly in the code, calculate them dynamically (for example, from system data), or read them from our own control tables.

Adjusting the Semantically Partitioned Object

After implementing the BAdI, there are two programs available for adjusting the semantically partitioned object:

  1. Automated using a program: We can regularly schedule program RSLPO_MASS_ACT via a process chain; in the process chain maintenance transaction, enter program RSLPO_MASS_ACT. The program checks which semantically partitioned objects with a BAdI implementation have to be adjusted. We select which objects should be adjusted, and these are then adjusted and activated in the background. Program RSLPO_MASS_ACT has a transport connection.
  2. Program RSLPO_MASS_ACT_BDG can also be used to apply the BAdI implementation in process chains. In the program variant, we specify the semantically partitioned objects for which the BAdI adjustment should be performed. If nothing is entered here, all semantically partitioned objects with a BAdI implementation are processed. For the selected objects, the program checks whether an adjustment is required and performs it if so. This process runs in the background.
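The division of labour between the BAdI and the adjustment programs can be modeled conceptually. The Python below is an illustration only; the function names and the interface are invented for the sketch and are not the real signature of RSLPO_BADI_PARTITIONING or RSLPO_MASS_ACT.

```python
# Conceptual model: a "BAdI implementation" computes the desired
# partitioning dynamically from system data, and an "adjustment program"
# (the role RSLPO_MASS_ACT plays) compares it with the current state and
# applies the difference in the background.
import datetime

def badi_get_partitions(n_months=12):
    """Hypothetical BAdI logic: one partition per calendar month for the
    last n_months months, computed from today's date."""
    y, m = datetime.date.today().year, datetime.date.today().month
    months = []
    for _ in range(n_months):
        months.append(f"{y:04d}{m:02d}")
        m -= 1
        if m == 0:
            y, m = y - 1, 12
    return sorted(months)

def adjust(current, desired):
    """Return which partitions to create and which to retire."""
    return sorted(set(desired) - set(current)), sorted(set(current) - set(desired))

to_create, to_retire = adjust(["202401", "202402"], badi_get_partitions())
```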

SPO Changeability

SPO changeability should be set to 'Allowed' in the production system before adding new partitions, or for scenarios where data in one partition has to be moved to a different partition or a partial deletion of data is required.

Converting Semantically Partitioned Objects to SAP HANA-Optimized Objects


If we are using an SAP HANA database and want to benefit from it when loading data into semantically partitioned objects that are based on InfoCubes, it is recommended to convert the existing semantically partitioned objects. Activation of standard DataStore objects is automatically optimized for SAP HANA, so for semantically partitioned objects that are based on DataStore objects, no conversion is needed.

Deleting data from an SPO

Previously, it was only possible to delete data from the SPO's individual objects one by one. As this has to be done for each and every object, it can be a tedious job for the BW administrator. To enable deleting the data of the whole SPO in one step, set the RSADMIN table parameter RS_SPO_DEL_ALL_DATA to the value 'X'.


Related Content:

SAP Help Portal

SAP Community Network

SAP Support Portal
