Everyone knows this question is the ‘kiss of death’ in the real world; there is no good answer. In the world of SAP, however, it can be a very helpful question to answer, and I believe a strong case can be made for comparing IT departments against one another. This comparison process is called ‘benchmarking.’ The latest in benchmarking surveys for Centers of Excellence was the topic of session #6 in the ASUG CoE Webcast series.  Steve Wroblewski, from SAP’s IT Planning Group, outlined the method behind the Total Cost of Ownership benchmark survey and the updated RunSAP Like a Factory benchmark survey, and provided insights into how these surveys can help IT organizations identify areas ripe for process and cost improvements.


The Current Challenge

Over the last 10 years, the message to SAP organizations has been to:

  • Deliver faster
  • Create insights
  • Enable flexibility
  • Lower total cost of ownership

These themes have not gone away, and they will not; it is much more likely that leadership has intensified them.  When an organization faces this task, a few improvement areas are probably obvious from the very beginning.  But after these are addressed, what next? Where are the next places to look to improve performance?  How might performance be improved?  This situation is tailor-made for benchmarking surveys.

The two surveys most often used are the TCO Operations survey and the RunSAP Like a Factory survey.  The TCO survey focuses on the bigger questions of overall cost performance.  It provides a comparison of the cost of operating SAP, the mix of resources, and information on incident and change management performance.  It also provides a labor cost focus while secondarily offering a glimpse of the software environment.  While it does not provide a strong prediction of the specifics of operational excellence, it is a good barometer of how well an organization stacks up against its peers.  The RunSAP Like a Factory survey, on the other hand, focuses on Operational Excellence and the overall adoption of Best Practices within the Center of Excellence.  This survey looks at three (3) primary areas:

  • IT Service Management
  • Business Process Operations
  • Technical/ Application Operations

Generally, this survey looks at the adoption of specific best practices and the overall importance of the practice to a company.  A comparison is then made to other companies.


So What’s the Big Deal?

When done well, benchmark surveys act as a treasure map for uncovering tangible savings and improvement opportunities.  I define ‘done well’ as spending the right amount of time and care to gather high-quality data and complete the survey.  After all, the ‘garbage in, garbage out’ principle truly applies in this instance.  The other part of ‘done well’ is taking the time to dig into the results, identifying areas to address, and actually addressing them.  There can truly be benefit in completing the survey, but effort is needed on the back end to make it worthwhile; the survey will not accomplish anything by itself.  My experience is that companies often complete the survey with non-validated data, look at the results half-heartedly, and then dismiss them because they place no faith in the underlying data.  There is no value in this.  When another survey is suggested, the response is typically, “we did it before and it wasn’t worth it.”  Of course it wasn’t worth it, because nothing was done with it.

Elements in each of the surveys (TCO Operations and RunSAP Like a Factory) should be used in tandem to identify opportunities.  For example, if the line for ‘Internal FTEs per 100 Active Users’ shows a value higher than the average, there may be an issue with the overall number of resources.  Why would that be?  Additional benchmarks on the distribution of CoE roles may reveal that one group of resources (say, development resources) has a significant variance to the peer group.  Further investigation reveals that the complexity of the overall application, as measured by the number of Y/Z objects and the number of user exits, is significantly above average. A correlation between the two measures is likely.  (This is a real-life scenario.  One customer I worked with had this exact situation.  After some investigation, we discovered a contributing factor was the inability of the organization to get quality business requirements at the beginning of a project.  As a result, as additional requirements came in, it was easier to build new code than to alter the design.  Consequently, more developers were needed to maintain the application.  Subsequent attention to nailing down business requirements prior to development slowed the pace of development staff growth, and the ratio began to move closer to the benchmark.)
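The role-by-role comparison above can be sketched in a few lines of Python. This is a minimal illustration, not SAP survey logic; the role names, ratios, and the 25% variance threshold are all invented for the example.

```python
# Hypothetical sketch: flag CoE role groups whose staffing ratio
# (internal FTEs per 100 active users) is well above a peer benchmark.
# All figures and role names are illustrative, not SAP survey data.

PEER_BENCHMARK = {
    "development": 0.9,
    "functional": 1.1,
    "basis": 0.5,
}

OUR_RATIOS = {
    "development": 1.8,   # e.g. inflated by heavy Y/Z custom code
    "functional": 1.2,
    "basis": 0.5,
}

def flag_variances(ours, peers, threshold=0.25):
    """Return roles whose ratio exceeds the peer value by more than `threshold` (25%)."""
    flagged = {}
    for role, peer in peers.items():
        variance = (ours[role] - peer) / peer
        if variance > threshold:
            flagged[role] = round(variance, 2)
    return flagged

print(flag_variances(OUR_RATIOS, PEER_BENCHMARK))
# development is 100% above peers -> investigate custom-object complexity
```

In the customer scenario described above, a flag on the development role would have pointed straight at the Y/Z object count as the next thing to investigate.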

Another TCO metric might look at ‘Changes per 100 Active Users’.  If this metric differs significantly from the average, it may indicate an environment where business requirements shift frequently, or a business organization that is not quite sure what is needed.  Either way, a high number of changes made to the SAP application requires a greater degree of support.  Digging into this metric to better understand why will yield positive results: fewer changes require fewer resources and yield a more stable application, so overall support costs will decline.
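The metric itself is simple to compute and compare; the sketch below uses made-up change counts, user counts, and a 20% tolerance band, since the survey does not publish its exact comparison rules.

```python
# Illustrative calculation of a 'Changes per 100 Active Users' style
# metric and a rough comparison against a peer average. The change
# count, user count, peer value, and 20% band are all invented.

def changes_per_100_users(change_count, active_users):
    """Normalize a raw change count to a per-100-active-users rate."""
    return change_count / active_users * 100

ours = changes_per_100_users(480, 2000)   # 24.0 changes per 100 users
peer_avg = 15.0

if ours > peer_avg * 1.2:  # more than 20% above the peer average
    print(f"{ours:.1f} vs peer {peer_avg}: investigate requirements churn")
```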

The RunSAP Like a Factory survey is a much more diagnostic instrument.  It examines various best practices (133 of them, in fact) collected from customers over the last six years.  The respondent is asked to rate their level of adoption of each best practice (how well they do it) and how important the practice is to them. Performance metrics are also part of this survey, though there is some overlap with the TCO survey. The output shows how the customer’s responses rate against a peer group.  Again, there are insights that can be turned into real dollars through better performance.

My focus on this survey looks at the results in two ways.  First, I look at the places where the customer’s score is noticeably lower than the peer group’s.  In essence, this says the company does not do something identified as a best practice.  The specific best practices are listed, so I can examine each practice, determine why and how we are falling short, and immediately begin to develop a plan to improve.  Second, I look at the items where current performance differs significantly from the level of importance placed on the activity (the ‘coverage’).  In essence, this says that something is important to the organization but the organization is not doing it well.  The next step is to understand which best practices the organization should focus on in order to close that gap.
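These two views can be expressed as simple filters over the survey responses. The sketch below assumes invented practice names and 1–5 scores; it is not the actual RunSAP Like a Factory catalog or scoring model.

```python
# Sketch of the two analysis views described above, on a 1-5 scale:
# (a) adoption well below the peer group, and (b) a large gap between
# importance and adoption (the 'coverage' view). Practice names and
# scores are hypothetical.

practices = [
    # (name, our_adoption, peer_adoption, importance)
    ("Incident trend reporting",    2, 4, 5),
    ("Job scheduling automation",   4, 4, 3),
    ("Interface monitoring",        1, 3, 5),
]

# View 1: practices where peers adopt noticeably better than we do.
behind_peers = [name for name, ours, peer, _ in practices if peer - ours >= 2]

# View 2: practices rated important but poorly adopted ('coverage' gap).
coverage_gaps = [name for name, ours, _, imp in practices if imp - ours >= 2]

print("Below peer group:", behind_peers)
print("Importance/adoption gaps:", coverage_gaps)
```

Either list becomes the starting point for an improvement plan; in practice the two lists often overlap, which helps prioritize.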

As I have outlined, these surveys can be valuable tools to highlight areas where performance improvements can and should be made.  Through a close analysis of the benchmark results, a dollar value can be attached to each improvement opportunity.  For example, in the case of the ‘Support FTEs per 100 Active Users’ metric, the difference between the respondent’s value and the Top Quartile value can be converted to a dollar value.  If the fully-loaded cost of an Internal FTE is $150,000 (salary + benefits + other), a benchmark value of 1.5 compared to a customer value of 3.2 is a gap of 1.7 FTEs per 100 active users, or $255,000.  That may or may not seem like much, but it is an amount that can be saved or invested elsewhere.
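That conversion is straightforward to sketch; the figures below are the ones used in the example above, and the per-100-users simplification is mine.

```python
# Converting an FTE-ratio gap into a dollar value, using the figures
# from the example above: $150,000 fully-loaded cost per internal FTE,
# benchmark 1.5 vs. customer 3.2 FTEs per 100 active users.

FULLY_LOADED_COST = 150_000   # salary + benefits + other, per internal FTE

def savings_per_100_users(customer_ratio, benchmark_ratio, cost=FULLY_LOADED_COST):
    """Dollar value of closing the gap to the benchmark, per 100 active users."""
    gap = customer_ratio - benchmark_ratio   # excess FTEs per 100 users
    return round(gap * cost)

print(savings_per_100_users(3.2, 1.5))   # 255000
```

For an organization with several hundred active users, the gap scales accordingly, which is what makes this metric a good candidate for a business case.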


So Now What Do I Do?

Have you been convinced benchmarking is a valuable tool and you should get working on it?  (Maybe. Maybe not.)  The best part about these surveys is that they are absolutely free to conduct.  In today’s world, there are very few things that are free, as in costing you nothing. The surveys are located on SAP’s Value Management Center website (https://valuemanagement.sap.com/vlm/#).  A simple registration process is required, followed by a process to select and assign the particular survey to your account.  Once this is complete, each survey can be downloaded as an Excel spreadsheet and filled out.  The data is then input into the survey, the specific inputs are validated, and a final report is generated.

When selecting a peer group, it is helpful to consider industry, but annual revenue is not a particularly useful basis for comparison.  Other helpful perspectives look at scale (# of users, geographic coverage) and learning curve (# of years on SAP).  By looking at these views, the results can provide a relevant comparison to start.

One final piece of the puzzle is the length of time needed to complete the survey.  Typically, a couple of days are needed to gather and collect the necessary data.  In fact, Mr. Wroblewski stated that if the data cannot be collected in two (2) days’ time, that tends to be a finding in itself that all is not well in the IT organization. When you combine free (as in, zero cost) with a minimum of two days’ effort and a significant return, my question would be – “What are you waiting for?”

If you missed this webcast, the replay can be found here.  If you are interested in learning more about this topic, a full-day pre-conference session on Centers of Excellence will be held on May 16 in Orlando, FL prior to the first day of the ASUG/SAPPHIRE conference. Additional details can be found on the ASUG website (www.ASUG.com). In addition, if you are interested in topics relevant to Centers of Excellence, you are encouraged to join the ASUG CoE Community Group.

If you have any questions regarding this webcast, please contact me at douglas.shuptar@sap.com.
