jagannadhb
Active Participant

How to improve performance of DP-related processes

During support projects we encounter many pain points where DP processes, data extraction, time series objects, and process chains take too long to complete, along with LiveCache inconsistencies. In this document I would like to present the steps needed to improve the performance of APO DP processes, based on my experience.

As consultants we specialize in APO implementations, bug fixes, change requests, and so on. Beyond that, we also need to optimize the system for better performance, much as we tune our own personal computers.

Performance bottlenecks in APO Demand Planning (DP) can manifest themselves in a number of ways:

  • Planning preparation steps such as data loads into LiveCache, realignment runs, or background processes can take longer than expected.
  • End users may experience longer refresh times for planning book data.
  • Data loads from LiveCache into BW for reporting and backup purposes may take many hours.

Such performance bottlenecks result in longer planning cycles, reduce DP's effectiveness, and can have ripple effects across the business.

The following measures can help mitigate sizing and performance obstacles:

  • Divide background processes into smaller subsets (new jobs) and run them in parallel to improve performance. Because the same objects/CVCs cannot be processed by parallel jobs without locking conflicts, make sure the subsets select disjoint CVC ranges when splitting the processes (see the sketch after this list).
  • Adding planning versions dramatically increases the planning area size, because each version essentially “copies” all the key figures.
  • Extracting data from LiveCache as few times as possible improves performance. For reporting, continuously extract only the data that changes in real time, and extract all calculated data (after the nightly batch process) only once a day.
  • While extracting data from a planning area, see the post “Planning Area datasource Parallel processing”.
  • Using aggregates in DP improves performance: with aggregates, the system does not have to read all the CVCs.
  • While loading data from BW to APO, analyze how much historical data you actually need for planning and restrict the extraction selections accordingly. This can save a lot of process chain run time.
  • While working with background jobs in DP, it is recommended to set the “Generate log” indicator in the planning job. However, delete the log files on a regular basis, because performance deteriorates as the log size grows.

Check the log size via table /SAPAPO/LISLOG and delete old logs via transaction /SAPAPO/MC8K.

  • Use SAP's internal drill-down macro functions instead of manual drill-downs, because internal drill-downs allow large data sets to be processed internally without having to transfer them to the front end for display.
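
For the job-splitting point above, the following is a minimal ABAP sketch of how parallel jobs with disjoint selections could be scheduled. The variant names ZDP_SUBSET1 to ZDP_SUBSET3 are hypothetical placeholders for variants that select non-overlapping CVC ranges, and /SAPAPO/TS_BATCH_RUN is assumed to be the mass-processing report behind your DP planning jobs; adjust both to your system.

REPORT z_dp_parallel_jobs.

" Minimal sketch: schedule one background job per CVC subset.
" Assumption: variants ZDP_SUBSET1..3 exist for the planning report
" and select disjoint CVC ranges, so the jobs cannot lock each other.
DATA: lv_jobname  TYPE tbtcjob-jobname,
      lv_jobcount TYPE tbtcjob-jobcount,
      lt_variants TYPE TABLE OF rsvar-variant.

APPEND 'ZDP_SUBSET1' TO lt_variants.
APPEND 'ZDP_SUBSET2' TO lt_variants.
APPEND 'ZDP_SUBSET3' TO lt_variants.

LOOP AT lt_variants INTO DATA(lv_variant).
  lv_jobname = |ZDP_PARALLEL_{ sy-tabix }|.

  " Open a background job ...
  CALL FUNCTION 'JOB_OPEN'
    EXPORTING
      jobname  = lv_jobname
    IMPORTING
      jobcount = lv_jobcount.

  " ... add one step that runs the planning report with its subset variant ...
  SUBMIT /sapapo/ts_batch_run            " assumption: DP mass-processing report
    USING SELECTION-SET lv_variant
    VIA JOB lv_jobname NUMBER lv_jobcount
    AND RETURN.

  " ... and release the job to start immediately.
  CALL FUNCTION 'JOB_CLOSE'
    EXPORTING
      jobcount  = lv_jobcount
      jobname   = lv_jobname
      strtimmed = 'X'.
ENDLOOP.

Because each variant covers a disjoint CVC range, the jobs can run in parallel without blocking each other on LiveCache locks.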

In our project, we reduced the database size by 4 TB by deleting the following logs on a regular basis (the deletions were scheduled in a process chain); a sketch for checking which log tables are growing follows this list:

  • Query statistics table
  • Statistics logging table
  • Aggregates/BIA Index process
  • BIA Access Counter
  • DTP Statistics
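
To spot which of these log tables are actually growing (including /SAPAPO/LISLOG mentioned above), a quick row-count check can help. The following is a minimal ABAP sketch; the table list is only an assumption based on typical NetWeaver 7.0 statistics tables, so substitute the tables relevant to your release.

REPORT z_check_log_sizes.

" Minimal sketch: print row counts for log/statistics tables that
" tend to grow. The table list is an assumption; adjust per release.
DATA: lt_tables TYPE TABLE OF tabname,
      lv_count  TYPE i.

APPEND '/SAPAPO/LISLOG' TO lt_tables.   " DP job logs
APPEND 'RSDDSTAT_OLAP'  TO lt_tables.   " query statistics
APPEND 'RSDDSTAT_DM'    TO lt_tables.   " data manager statistics
APPEND 'DBTABLOG'       TO lt_tables.   " table change logs

LOOP AT lt_tables INTO DATA(lv_table).
  SELECT COUNT( * ) FROM (lv_table) INTO lv_count.
  WRITE: / lv_table, lv_count.
ENDLOOP.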

Using the programs below, we deleted the old data in the system (a small scheduling sketch follows after this list):

  • To delete statistical data records, we used program RSDDSTAT_DATA_DELETE to delete data older than 6 months.
  • To delete old entries from table DBTABLOG, we used program RSTBPDEL to delete data older than 60 days.
  • To delete DTP error logs, we used program RSBM_ERRORLOG_DELETE to delete logs older than 60 days.
  • RSSTATMAN_CHECK_CONVERT_DTA and RSSTATMAN_CHECK_CONVERT_PSA

These reports build fast-access tables. To improve performance when displaying requests (in InfoProvider administration, for example) and when loading data, SAP NetWeaver 7.0 stores the administration information for requests in special tables (RSSTATMANPART and RSSTATMANPARTT for InfoProviders, RSSTATMANPSA and RSSTATMANPSAT for PSA tables), which allows quicker access to this information. These quick-access tables are part of the status administration that makes managing requests easier. The reports RSSTATMAN_CHECK_CONVERT_DTA and RSSTATMAN_CHECK_CONVERT_PSA write the available request information for existing objects to the new tables to enable the quicker access.

  • RSREQARCH_WRITE

To archive requests older than 3 months.

  • RSARCHD

To create deletion jobs for archive files of valid but incomplete archiving sessions for an archiving object.
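
As mentioned above, one simple way to run these housekeeping reports regularly is a small wrapper program scheduled as an ABAP step in the housekeeping process chain. The sketch below submits the three deletion reports named earlier; the variant names Z_180D and Z_60D are hypothetical and must be maintained with date selections matching the retention periods above.

REPORT z_housekeeping_run.

" Minimal sketch: submit the standard housekeeping reports with
" pre-maintained variants (variant names are placeholders; each
" variant holds the date selection for its retention period).
SUBMIT rsddstat_data_delete USING SELECTION-SET 'Z_180D' AND RETURN. " statistics older than 6 months
SUBMIT rstbpdel             USING SELECTION-SET 'Z_60D'  AND RETURN. " DBTABLOG entries older than 60 days
SUBMIT rsbm_errorlog_delete USING SELECTION-SET 'Z_60D'  AND RETURN. " DTP error logs older than 60 days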

Please share your valuable inputs and suggestions.
