
How to improve performance of DP related processes

During support projects we face many pain points where DP processes, data extractions, time series objects, and process chains take too long to complete, along with liveCache inconsistencies. In this document I would like to present, based on my experience, the steps needed to improve the performance of APO DP processes.

Bad DP performance can also have ripple effects beyond the planning team.

As consultants we specialize in APO implementations, bug fixes, change requests, and so on, but we also need to optimize the system for better performance, much as we tune our personal computers.

Performance bottlenecks in APO Demand Planning (DP) can manifest themselves in a number of ways.

  • Planning preparation steps such as data loads into LiveCache, realignment runs, or background processes can take longer than expected.
  • End users may experience longer refresh times for planning book data.
  • Data loads from LiveCache into BW for reporting and backup purposes may take many hours.

Such performance bottlenecks result in longer planning cycles and reduce DP’s effectiveness.

The following are a few thoughts identified to help mitigate sizing and performance obstacles:

  • Divide background processes into smaller subsets (new jobs) and run them in parallel to improve performance. Because the same objects/CVCs cannot be processed by parallel jobs without running into locking problems, take care to split the workload into disjoint selections.
  • Adding Planning Versions dramatically increases the planning area size, since each version essentially “copies” all the Key Figures.
  • Extracting data from LiveCache as few times as possible improves performance. Hence, for reporting, extract only the data that changes in real time, and extract all calculated data (after the nightly batch process) only once a day.
  • While extracting data from Planning area, please see this post “Planning Area datasource Parallel processing”
  • Using aggregates in DP improves performance: with aggregates, the system does not have to read all the CVCs.
  • While loading data from BW to APO, analyze how much historical data you actually require for planning and maintain your selections accordingly while extracting. This can save a lot of process chain run time.
  • While working with background jobs in DP, it is recommended to use the “Generate log” indicator in the planning job. However, you should delete log files on a regular basis, because performance deteriorates as the log file size grows.

Check the log file size via table /SAPAPO/LISLOG and delete old logs via transaction /SAPAPO/MC8K.

  • Select internal SAP drill-down macro functions instead of manual drill-downs, because internal SAP drill-downs allow large data sets to be processed internally without having to transfer them to the front end for display.
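The parallel-splitting tip above can be sketched generically. The snippet below is an illustrative Python sketch, not SAP code: it partitions a list of CVC keys into disjoint subsets so that parallel jobs never compete for locks on the same object. The CVC naming scheme and `process_subset` body are hypothetical placeholders.

```python
from concurrent.futures import ThreadPoolExecutor

def split_disjoint(cvcs, n_jobs):
    """Partition CVC keys into n_jobs disjoint subsets, so that no two
    parallel jobs ever process (and therefore lock) the same CVC."""
    buckets = [[] for _ in range(n_jobs)]
    for i, cvc in enumerate(cvcs):
        buckets[i % n_jobs].append(cvc)
    return buckets

def process_subset(subset):
    # Placeholder for the real background-job logic (e.g. a macro run
    # over just these CVCs); here we only count the items.
    return len(subset)

if __name__ == "__main__":
    cvcs = [f"PROD{i:04d}/LOC{i % 7:02d}" for i in range(1000)]  # dummy CVCs
    with ThreadPoolExecutor(max_workers=4) as pool:
        processed = sum(pool.map(process_subset, split_disjoint(cvcs, 4)))
    print(processed)  # every CVC is processed exactly once
```

The key design point is that the split is done once, up front, over non-overlapping selections — exactly what the DP job variants need to avoid lock collisions.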

In our project, we reduced the database size by 4 TB by deleting the following logs on a regular basis (by putting the deletions in a process chain):

  • Query statistics table
  • Statistics logging table
  • Aggregates/BIA Index process
  • BIA Access Counter
  • DTP Statistics

Using the following programs, we deleted old data from the system:

  • We used program RSDDSTAT_DATA_DELETE to delete statistical data records older than 6 months.
  • We used program RSTBPDEL to delete entries older than 60 days from table DBTABLOG.
  • We used program RSBM_ERRORLOG_DELETE to delete DTP error logs older than 60 days.
  • RSSTATMAN_CHECK_CONVERT_DTA and RSSTATMAN_CHECK_CONVERT_PSA

These reports build the fast-access tables described below.

To improve performance when displaying requests (in InfoProvider administration, for example) and loading data, in SAP NetWeaver 7.0, the administration information for requests is stored in special tables (RSSTATMANPART and RSSTATMANPARTT for InfoProviders and RSSTATMANPSA and RSSTATMANPSAT for PSA tables). This allows quicker access to the information. These quick access tables are part of the status administration that makes managing requests easier. The reports RSSTATMAN_CHECK_CONVERT_DTA and RSSTATMAN_CHECK_CONVERT_PSA write available request information for existing objects to the new tables to enable quicker access.
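The idea behind these fast-access tables is a generic one: materialize information that is expensive to gather into a small lookup structure keyed by object. A hypothetical Python sketch of that pattern (the field names and data are illustrative, not SAP's):

```python
def build_fast_access(requests):
    """Materialize per-provider request info into a dict for O(1) lookup,
    instead of scanning every request on each display -- the role that
    tables like RSSTATMANPART play for InfoProviders."""
    index = {}
    for req in requests:
        index.setdefault(req["provider"], []).append(req["request_id"])
    return index

# Illustrative request metadata, keyed by a hypothetical provider name.
requests = [
    {"provider": "ZSALES", "request_id": "REQ1"},
    {"provider": "ZSALES", "request_id": "REQ2"},
    {"provider": "ZSTOCK", "request_id": "REQ3"},
]
print(build_fast_access(requests)["ZSALES"])  # ['REQ1', 'REQ2']
```

As with the SAP conversion reports, the index is built once from existing data and then serves every subsequent lookup cheaply.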

  • RSREQARCH_WRITE

To archive requests older than 3 months.

  • RSARCHD

To create deletion jobs for archive files of valid but incomplete archiving sessions for an archiving object.
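All of the cleanup programs above implement the same pattern: delete records older than a fixed retention period. A minimal, generic sketch of that pattern in Python — the retention values mirror the ones quoted above, while the record layout is a hypothetical illustration:

```python
from datetime import date, timedelta

# Retention periods (in days) mirroring the jobs described above.
RETENTION_DAYS = {
    "statistics": 180,    # RSDDSTAT_DATA_DELETE: ~6 months
    "dbtablog": 60,       # RSTBPDEL
    "dtp_errorlog": 60,   # RSBM_ERRORLOG_DELETE
}

def purge_old(records, kind, today):
    """Keep only (record_id, created_on) tuples newer than the cutoff."""
    cutoff = today - timedelta(days=RETENTION_DAYS[kind])
    return [(rid, d) for rid, d in records if d >= cutoff]

# Example: with a 60-day retention, only the recent record survives.
records = [("old", date(2023, 1, 1)), ("new", date(2023, 5, 1))]
print(purge_old(records, "dbtablog", today=date(2023, 5, 15)))
```

Scheduling such a purge regularly (here, via a process chain) is what keeps the log tables — and hence the database — from growing unbounded.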

Please provide your valuable inputs and suggestions.


11 Comments


  1. Borat S

    Good. You may like to append the following diagnostic tools to make this a more holistic document. They help not just with DP performance but with APO as a whole. I compiled these from many documents, the internet, SAP, emails, and guesswork for a client who engaged me long ago for a completely unrelated job but dragged me into this. Much of this is in the context of general performance optimization and may not be the ultimate solution if your design is bad at the application level itself.

    USEFUL REPORT PROGRAMS

    /SAPAPO/OM_PERFORMANCE Report “Performance Benchmark”


    /SAPAPO/OM_LVC_OBJECTS liveCache Sizing Verification


    /SAPAPO/OM_TS_FILLRATE liveCache Sizing Verification


    /SAPAPO/OM_REORG_DAILY – Reorganization of liveCache objects – scheduled daily in production


    ST03N – Workload Analysis. The 5th column is the time spent in liveCache. This can be good reassurance of where the problem lies.


    LC10 – choose Console → Active Tasks or Runnable Tasks. Shows currently active tasks in liveCache, or runnable tasks that are waiting either for liveCache processing time or for a response from SCM/ABAP programs.


    SE30 – Runtime analysis


    ST05 – Trace analysis (also see the liveCache trace)




    RELEVANT NOTES


    719652 Up-to-date information on important SAP liveCache parameters. Changes in the hardware configuration of your SAP liveCache machine, such as additional RAM or CPUs, or changes in application data volumes or configuration may require different parameter settings. Check the above note regularly for updated parameter settings. If you experience performance issues, check your SAP liveCache settings against the latest recommendations in this note.


    039412 How many work processes to configure


    146289 Parameter Recommendations for 64-Bit SAP Kernel


    205220 Minimum size: MAXUSERTASKS in the liveCache


    208317 Performance problems in the liveCache


    337445 liveCache and storage management. Describes how to perform the calculation and how to adjust the parameter OMS_HEAP_LIMIT.


    458369 Increase in liveCache due to increase in history


    537210 Determination of the liveCache main memory


    487972 Operating system parameterization of liveCache





    VERY IMPORTANT LIVECACHE PARAMETERS – GUIDELINES


    CACHE_SIZE: Roughly, this parameter can be set to 50% of the available physical memory. The initial value is usually defined during the sizing of your system, but it may require some tuning during normal operation, or in other situations such as an increase in RAM or in the data volume on your SAP liveCache server.


    OMS_HEAP_LIMIT: Roughly, this parameter can be set to 75% (on a 64-bit operating system); see note 337445 for the calculation.
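    As a quick sanity check, these rules of thumb are simple arithmetic. The helper below is a hedged illustration, not an SAP-validated sizing tool: only the 50%-of-RAM rule for CACHE_SIZE comes from the guideline above, and the exact base for OMS_HEAP_LIMIT should be taken from note 337445, so it is kept as a bare percentage here.

```python
def suggest_cache_size_mb(physical_ram_mb):
    """Starting point for CACHE_SIZE: roughly 50% of physical memory,
    per the guideline above. Tune afterwards against note 719652."""
    return physical_ram_mb // 2

# OMS_HEAP_LIMIT guideline for 64-bit systems; the base it applies to
# is described in note 337445, so we keep it as a bare percentage here.
OMS_HEAP_LIMIT_PCT = 75

print(suggest_cache_size_mb(65536))  # 64 GB of RAM -> 32768 MB starting point
```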





    HOUSEKEEPING JOBS (check frequencies in the SAP APO solution operations guides)


    RSLVCBACKUP – Backup of liveCache


    /SAPAPO/OM17


    /SAPAPO/OM_REORG_DAILY


    /SAPAPO/OM_LCAALERTS


    /SAPAPO/OM_LC_LOGGING_LOG_DEL


    /SAPAPO/TS_LCM_CONSCHECK


    /SAPAPO/PSTRU_TOOLS





    TRACE AND LOG FILES


    The knldiag file is saved to knldiag.old during a liveCache restart. For error analysis, it is important to save the knldiag files before they are overwritten on subsequent restarts of liveCache. Depending on the installation, you can find the knldiag file at operating system level in the directory /sapdb/data/wrk/<liveCacheName>.


    Another important log file is knldiag.err. All liveCache errors are recorded in this file. You can view this file from within LC10 (or at operating system level in the same directory as the knldiag file). This file is useful for liveCache error analysis.


    After every restart of your SCM system, check the initialization log of liveCache. You can do so in transaction LC10 (Monitoring → Problem Analysis → Logs → Operating → Current). Again, depending on your installation, you can find it at operating system level in the directory /sapdb/<liveCacheName>/db as file lcinit.log.


    For serious error analysis, you may need to use kernel and/or LCA traces. Please use these traces only in coordination with SAP Active Global Support, as they can heavily influence system performance. To turn LCA traces on/off, use transaction /SAPAPO/OM02. To view LCA trace files, use transaction /SAPAPO/OM01.




    MONITORS (from SAP’s performance management guidelines document; copyright SAP — I do not recall the actual document version now, but it appears to be generic)



    Workload Monitor – ST03N




    Memory Monitor – ST02





    OS Monitor – OS06 / OS07




    Live Cache Monitor – LC10



