
BWoH Optimization – Techniques

 

This article introduces the concepts of BWoH (SAP BW on HANA) data reduction and optimization. Readers who want a better understanding of SAP BWoH data reduction and optimization, and what it can do for their systems, will benefit from it. It gives insight into the data reduction and optimization process, which can be used to reduce the HANA in-memory footprint for better performance and more effective resource utilization.

The following activities can be performed to achieve this objective.

  • Data Flow Optimization: Identify redundant data flows and redesign them by removing redundant objects (DSOs/Cubes).
  • Housekeeping: Periodically maintain and clean up system log/error tables, which can grow large over time.

Data Flow Optimization – Solution Approach

  • Find the list of all DSOs and Cubes using metadata tables RSDCUBE and RSDODSO.
  • Determine the size of all DSOs and Cubes and sort them in descending order of size:

The following SQL can be used to get the top 100 DSOs and Cubes by size.

To get the list of Cubes by size:

SELECT TOP 100
       table_name,
       ROUND(SUM(memory_size_in_total/1024/1024/1024), 2) AS "MEM(GB)",
       ROUND(SUM(estimated_max_memory_size_in_total/1024/1024/1024), 2) AS "MAXMEM(GB)",
       SUM(record_count)
  FROM "PUBLIC"."M_CS_TABLES"
 WHERE table_name LIKE '/BI%/F%'
 GROUP BY table_name
 ORDER BY "MEM(GB)" DESC

To get the list of DSOs by size:

SELECT TOP 100
       table_name,
       ROUND(SUM(memory_size_in_total/1024/1024/1024), 2) AS "MEM(GB)",
       ROUND(SUM(estimated_max_memory_size_in_total/1024/1024/1024), 2) AS "MAXMEM(GB)",
       SUM(record_count)
  FROM "PUBLIC"."M_CS_TABLES"
 WHERE table_name LIKE '/BI%/A%'
 GROUP BY table_name
 ORDER BY "MEM(GB)" DESC
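The first bullet above (listing all active DSOs and Cubes from the metadata tables) can be sketched with queries like the ones below. This is a minimal sketch assuming the standard key fields INFOCUBE/ODSOBJECT and the active version flag OBJVERS = 'A'; verify the field names in your release:

SELECT INFOCUBE  FROM RSDCUBE WHERE OBJVERS = 'A'

SELECT ODSOBJECT FROM RSDODSO WHERE OBJVERS = 'A'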

  • For a quick analysis, you can also identify large DSOs/Cubes from the EarlyWatch Alert report or transaction ST14.
  • For each object:
    • Find the where-used list.
    • Find the lookups (we had a lookup finder utility built in the system for this purpose).
    • Check whether it is a redundant object, i.e. multiple layers of data flowing down and up without any specific logic in between.
    • Check whether any report exists on it and whether the report is in use (use field RSZCOMPDIR-LASTUSED).
    • Check whether it is used in any APD (use field RSANT_PROCESSI-OBJNM for this purpose).
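The APD check above can be sketched as below; the object name '/BIC/AZSALES00' is purely hypothetical, and the assumption that RSANT_PROCESSI carries the referenced object in OBJNM should be verified in your system:

SELECT * FROM RSANT_PROCESSI WHERE OBJNM = '/BIC/AZSALES00'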

Housekeeping Activity – PSA/Change Log Deletion:

  • Find the list of all change log and PSA tables and order them in descending order of size using the SQL below:

SELECT TOP 200
       table_name,
       ROUND(SUM(memory_size_in_total/1024/1024/1024), 2) AS "MEM(GB)",
       ROUND(SUM(estimated_max_memory_size_in_total/1024/1024/1024), 2) AS "MAXMEM(GB)",
       SUM(record_count)
  FROM "PUBLIC"."M_CS_TABLES"
 WHERE table_name LIKE '/BIC/B%'
 GROUP BY table_name
 ORDER BY "MEM(GB)" DESC

  • Find the corresponding DSO/DataSource names using table RSTSODS.
  • Analyze whether the change log or PSA is actually being deleted. If not, find out the reason; it is usually one of the following:
    • Deletion is not yet scheduled: include the tables in the PSA/change log deletion process chain and automate it.
    • Data mart status is pending: loading of one of the delta data targets may have been stopped, so its data mart status stays pending and the PSA/change log is not deleted. Fetch the delta for those targets using dummy selections and update the data mart regularly so that the PSA/change log is cleaned up regularly.
    • Find the DSOs that have only full-load data targets or no upward data target: include them in the process chain with only the third option selected (Delete activation requests).
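The RSTSODS lookup mentioned above can be sketched as follows. This is a sketch under the assumption that the generated table name (/BIC/B00…) is stored in ODSNAME_TECH and the using object in USEROBJ; the table name in the filter is a hypothetical example, so verify both against your system:

SELECT ODSNAME, USEROBJ
  FROM RSTSODS
 WHERE ODSNAME_TECH = '/BIC/B0000123000'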

SQL statements that can be used during the analysis:

To check size by host:

SELECT host,
       ROUND(SUM(memory_size_in_total/1024/1024/1024), 1) AS "MEM(GB)",
       ROUND(SUM(estimated_max_memory_size_in_total/1024/1024/1024), 1) AS "MAXMEM(GB)",
       SUM(record_count)
  FROM "PUBLIC"."M_CS_TABLES"
 GROUP BY host
 ORDER BY "MAXMEM(GB)" DESC

To check the size by table name:

SELECT table_name,
       ROUND(SUM(memory_size_in_total/1024/1024/1024), 1) AS "MEM(GB)",
       ROUND(SUM(estimated_max_memory_size_in_total/1024/1024/1024), 1) AS "MAXMEM(GB)",
       SUM(record_count)
  FROM "PUBLIC"."M_CS_TABLES"
 WHERE table_name LIKE '/BIC/B00%'
 GROUP BY table_name
 ORDER BY "MAXMEM(GB)" DESC

To check the number of requests in each InfoProvider:

SELECT TOP 100 ICUBE, COUNT(RNR)
  FROM "SAPBWP"."RSICCONT"
 GROUP BY ICUBE
 ORDER BY COUNT(RNR) DESC

To check the source/target mapping of transformations:

SELECT x.tranid,
       x.AGGR,
       (SELECT SOURCENAME FROM RSTRAN y WHERE y.OBJVERS = 'A' AND y.tranid = x.tranid) AS SOURCENAME,
       (SELECT TARGETNAME FROM RSTRAN y WHERE y.OBJVERS = 'A' AND y.tranid = x.tranid) AS TARGETNAME,
       (SELECT SOURCETYPE FROM RSTRAN y WHERE y.OBJVERS = 'A' AND y.tranid = x.tranid) AS SOURCETYPE,
       (SELECT TARGETTYPE FROM RSTRAN y WHERE y.OBJVERS = 'A' AND y.tranid = x.tranid) AS TARGETTYPE
  FROM rstranrule x
 WHERE objvers = 'A'
 GROUP BY x.tranid, x.aggr

To check the last data load timestamp in each InfoProvider:

SELECT icube, MAX(o.memory_size_in_total), MAX(timestamp)
  FROM "PUBLIC"."M_CS_TABLES" o, rsiccont c
 WHERE SUBSTRING(o.table_name, 7, LENGTH(o.table_name)) = c.icube
    OR SUBSTRING(o.table_name, 7, LENGTH(o.table_name)) = CONCAT(c.icube, '00')
    OR SUBSTRING(o.table_name, 8, LENGTH(o.table_name)) = CONCAT(c.icube, '00')
 GROUP BY icube

To check the last-used date of each query:

SELECT c.INFOCUBE, c.COMPID, b.LASTUSED
  FROM RSZCOMPDIR b, RSRREPDIR c
 WHERE b.COMPUID = c.COMPUID
   AND b.objvers = 'A'
   AND c.objvers = 'A'
   AND c.COMPTYPE = 'REP'

To check the number of DTP selections by InfoProvider:

SELECT COUNT(REQUID) AS no_of_selections,
       REQUID,
       (SELECT SRC FROM RSBKREQUEST y WHERE y.REQUID = x.REQUID) AS source,
       (SELECT TGT FROM RSBKREQUEST y WHERE y.REQUID = x.REQUID) AS target
  FROM "SAPBWP"."RSBKSELECT" x
 GROUP BY REQUID
 ORDER BY COUNT(REQUID) DESC

Conclusion

The standard SAP-delivered maintenance activities do not clean all the data in the PSA/change log tables; additional analysis is needed to clean up these tables efficiently. The steps above define a procedure to clean up the PSA/change log tables more effectively, and the scripts help with the analysis.

Learning Points

  • Data flow Optimization
  • PSA/Change Log Deletion
