An old dog learning new tricks – SAP education course ADM107 – Day 2 (of 2)
CPH! CPH! CPH!
ADM 107 – “SAP System Monitoring Using CCMS II” (where the “II” means 2, as in “I” is 1, in other words, ADM 106)
Since day 1 went well, it wasn’t surprising to have lowered expectations for day 2, particularly with much of the class having already worked through 2 prior days of the ADM106 class. As has been typical of SAP and other training classes, students started heading out in the afternoon of the last day. This dynamic makes it challenging for the instructor to set the pace so that important material is covered early. In this case, Jeff moved through the majority of the Central Performance History (CPH) material, but held the CPH-to-Business Intelligence (BI) topics in the “appendix” until late Friday afternoon. I held out with the other die-hards, but the run-through was completely theoretical by then.
The first topic was monitoring background jobs. I suppose the intent is to increase the usability of the base SAP scheduler, but we’ve been using an enterprise scheduler (CA AutoSys) for over 8 years, so I wasn’t too impressed with the features provided. Further, the material clearly pre-dates the current SAP alliance with another 3rd-party scheduler, so we only briefly touched on those capabilities. When I checked recently, there were over 70 products (or versions of products) certified compatible with the SAP scheduler API, meaning that customers have a wide range of choices in this space.
The second topic was log file monitoring. This feature appears to be powerful, yet not as obtuse to configure as other components. My lab partner and I were able to get parts of this working, but not all. As the class moved on to other topics, I knew this was a feature I needed to prove in our environment.
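For my own notes, CCMS log file monitoring is driven by a template file that the monitoring agent (sapccmsr) reads to decide which files to scan and which patterns raise alerts. The sketch below is from memory, not from the course materials; the directory, file pattern, and parameter names should all be verified against the SAP Help Portal before trying this on a real system.

```ini
LOGFILE_TEMPLATE
DIRECTORY="/usr/sap/XYZ/DVEBMGS00/work"
FILENAME="dev_w*"
MTE_CLASS=MyWorkProcessLogs
PATTERN_0="DB error"
VALUE_0=RED
```

The idea is one template section per log file family: the agent tails files matching FILENAME in DIRECTORY and turns each PATTERN_n match into a CCMS alert of the listed severity.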
Finally we were ready to set up the CPH repository. One question I asked and am still pondering is how to estimate and control space requirements for the data collected. During late afternoon lab work on day 1, I worked ahead and started collecting data on application server buffer values (hit ratio and swaps, to start).
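Lacking an official sizing formula, a back-of-envelope estimate is just metrics × samples per day × retention days × bytes per row. Every number below is an illustrative assumption of mine, not an SAP-published figure:

```python
# Rough CPH sizing sketch -- all figures here are assumptions,
# not SAP-published numbers.
def cph_size_mb(metrics, samples_per_day, retention_days, bytes_per_row=64):
    """Estimate raw CPH table growth for one collection/aggregation level."""
    rows = metrics * samples_per_day * retention_days
    return rows * bytes_per_row / 1024**2

# e.g. 500 monitored attributes sampled every 15 minutes (96/day),
# kept for 90 days at the quarter-hour granularity:
print(round(cph_size_mb(500, 96, 90), 1))  # ~263.7 MB
```

The same arithmetic, run per aggregation level (quarter-hour, hour, day), would at least bound the growth and show where the reorganization jobs need to trim.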
Then we kicked off the data reorganization jobs. Note that the “quarter hour” job is listed as running every 1 hour and 45 minutes. I didn’t ask Jeff about this, but I suspect that someone mucked with the base frequencies.
Finally, 2 screenshots: one with a tabular report and the other with graphics. I had better charts (with titles, etc.), but this was the only one I captured for posterity.
The last unit we covered was alert management. Like Solution Manager, this is a capability added after many companies had already implemented 3rd-party tools. While the ability to centrally manage alerts is an improvement, it wasn’t clear whether multiple alerts could be combined (to prevent redundant messages) or how to correlate different alerts for root cause analysis, and we didn’t drill into the alert console briefly mentioned in the course text.
Notes that Jeff referenced: