I joined CLC Metrics (a joint venture partnership between the Corporate Leadership Council and Infohrm) almost nine years ago. Hired as a strategic consultant specializing in workforce analytics, my first ever customer workshop did not end well – upon its conclusion, our team’s 75-minute flight home was cancelled due to fog, so we rented a car and drove six hours back to Washington, DC.
The workshop itself was uneventful, save for an insight into the unusual world of performance management. We were discussing how the client would collect data to gauge the effectiveness of the performance review process. It soon became clear that the results would be skewed. As the most senior attendee explained, “executives hired at pay grade X (pseudonym) and above cannot ever receive a performance rating below ‘meets expectations.’” Translation: from Day One, every leader, no matter how new to the role or how many goals were missed, would be in no danger of receiving an unsatisfactory performance review. Oddly enough, the performance review was not about reviewing performance; it was about justifying compensation decisions that had already been made.
Performance scores should provide vital quantitative inputs to succession management discussions; such processes would no doubt be hindered by a lack of differentiation – “well, Ed is clearly ready for the next step because he’s meeting expectations, just like everyone else.” Being able to distinguish between high and low performers and the goals they achieve is one way of measuring leadership impact.
With each post, I have sought the input of peers who approach the terrain from different angles – practitioners, researchers, consultants – so that I might uncover new ideas (and, at times, pass them off as my own!). In this case, I spoke to Steve Hunt, Vice President of Customer Research at SuccessFactors and a thought leader with 20+ years of experience in the talent management space, as well as former Infohrm & SuccessFactors colleague, Darren Shearer, who, for many years, ran our benchmarking and reporting program. In addition, Boris Snitkovskiy, a Senior Associate at PwC Saratoga, and I recently discussed performance analytics, which you’ll see reflected towards the end of the blog.
1. What’s conventional practice for using data to measure the impact of performance management?
Most HR/Talent Management strategic plans include some reference to “creating a high-performing workforce” – otherwise, why bother investing in your people?
Measuring this often begins with compliance – how many of your staff who were eligible for a performance review actually received one? You can argue about the semantics of “receiving” – in many instances, a score might be recorded but the feedback never actually delivered to the employee. What matters, though, is that performance was actually reviewed. It is not uncommon to meet people who have worked for years without ever having their performance formally assessed or documented. From a systems perspective, the company is paying these individuals but doesn’t know if they are doing the work for which they are being paid.
From there, practitioners might move into simple, yet insightful, measures of the process in aggregate. One of SuccessFactors’ retail customers, who was previously limited in the metrics they had access to, created a 1-page “Talent Index” for review at business strategy meetings. The index prefaced the metrics with four questions:
1. Are our most solid performers staying with the company? (High Performer Turnover Rate)
2. Are we growing capacity for high performance in our workforce? (High Performer Growth Rate)
3. Are we able to turn around performers needing improvement? (Associate Turnaround)
4. Are new people doing well? (New Hire Performance)
HR used the Index to build greater visibility into talent strategy – facilitating a data-driven discussion of performance scores at company-wide management meetings – and solicit ideas on related metrics to bring to future conversations. And simply asking questions like “are solid performers staying with the company” creates good conversation around what makes an employee a solid performer.
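As a rough illustration, the four Talent Index questions reduce to simple ratio calculations over employee records. The sketch below (plain Python; the field names, 1–5 rating scale, and high-performer cutoff are assumptions, since the retail customer’s actual definitions aren’t public) computes two of them:

```python
# Hypothetical employee records; field names and the 1-5 rating scale are assumptions.
employees = [
    {"id": 1, "rating": 5, "terminated": False, "new_hire": False},
    {"id": 2, "rating": 5, "terminated": True,  "new_hire": False},
    {"id": 3, "rating": 3, "terminated": False, "new_hire": True},
    {"id": 4, "rating": 4, "terminated": False, "new_hire": True},
    {"id": 5, "rating": 2, "terminated": True,  "new_hire": False},
]

HIGH = 5  # assumed cutoff for "high performer"

def high_performer_turnover_rate(staff):
    """Q1: share of high performers who left during the period."""
    high = [e for e in staff if e["rating"] >= HIGH]
    return sum(e["terminated"] for e in high) / len(high)

def new_hire_avg_rating(staff):
    """Q4: average performance rating among new hires."""
    new = [e for e in staff if e["new_hire"]]
    return sum(e["rating"] for e in new) / len(new)

print(high_performer_turnover_rate(employees))  # 0.5
print(new_hire_avg_rating(employees))           # 3.5
```

The other two questions (high-performer growth, associate turnaround) are period-over-period comparisons of the same kinds of counts.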
2. What are some barriers to the utilization of performance metrics?
With reference to the metrics above, as Darren points out, there are some potential risks – is having too many high performers (question #2) a flaw of the performance review process, and to what extent are efforts being expended on “turning around” poor performers (question #3) versus cutting loose staff who are not a good fit?
Otherwise, two general hurdles to performance metrics utilization are:
a. Grade Inflation – In his benchmarking research with Infohrm, Darren Shearer found that the Staffing Rate – High-Performers (defined as the percent of the workforce classified as high-performing) across all firms included in the benchmark dataset increased by 67 percent over a six-year period. That is an amazing statistic and harks back to my opening example of the customer workshop.
Not only is the grade inflation itself an issue (for data accuracy, fairness of the process, and ability to motivate employees used to being categorized as stars), but imagine the financial cost to the average organization. One such firm with whom Darren was familiar employed 10,000 staff, and, in three years, moved their Staffing Rate – High Performers from 30 percent of the workforce to 65 percent (all numbers rounded). Top performers were eligible for a 10 percent bonus on their base salary, as opposed to three percent for Mid-Performers. Assuming the additional High-Performing employees were previously Mid-Performers earning $60,000, this equates to $14.7M in additional bonus payments!
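Darren’s cost figure can be reproduced directly from the inputs in the example above – a quick sketch of the arithmetic:

```python
headcount = 10_000
high_share_before, high_share_after = 0.30, 0.65
avg_salary = 60_000
high_bonus, mid_bonus = 0.10, 0.03  # bonus as a share of base salary

# Employees newly classified as high performers: 10,000 x (0.65 - 0.30) = 3,500
newly_high = headcount * (high_share_after - high_share_before)

# Each now receives the high-performer bonus instead of the mid-performer one
extra_cost = newly_high * avg_salary * (high_bonus - mid_bonus)
print(f"${extra_cost / 1e6:.1f}M")  # $14.7M
```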
According to Darren, “understanding the drivers of high-performer inflation is the first step to improving the validity of performance data. For many businesses, it can be a struggle to get people to comfortably accept that they fall into the “meets expectations” category. Being average is increasingly becoming a stigma as companies push their workers to become high-performing individuals in a high-performing organization.”
Steve’s perspective on how the rating scales have become corrupted is that “many organizations have a pay-for-tenure, rather than pay-for-performance, culture” – employees receive proportionally higher pay based on years of experience rather than truly exceptional performance. As he puts it, “One of the reasons for this is that many companies have failed to clarify what types of behaviors and goals define high performance. Tenure is easy to measure, so raters over-rely on it, even though it often has a very weak relationship to actual performance. But paying for performance requires actually defining performance, and this takes work that many companies for some reason are unwilling to do despite the obvious and massive benefits it generates.”
The obvious consequence can be that managers “spread the money around” rather than awarding top performers their due credit. Performance management, therefore, becomes a compensation allocation, rather than impact evaluation, process.
b. Under-Utilization of the Data They Do Have – Relatedly, according to Steve, “if people don’t trust the quality of data, they don’t use it and it doesn’t get better.” Conversely, if managers use performance data to make decisions, people will get more serious about its quality.
Steve continues, “I’m always baffled by managers who say ‘we shouldn’t use performance management data because it isn’t accurate’ when this data is often generated by these same managers. If companies want to give managers better insights based on performance management data then they need to get managers to provide better quality data into the performance management process.”
Based on Steve’s experience, to be more effective, firms need clear criteria for measuring employees’ performance – both to give feedback (coaching) and to reward behavior (pay, promotions) – grounded in strong competency models and goals (to measure the “what” and the “how”), with apples-to-apples calibration across raters.
3. What might be examples of foundational metrics to apply to performance management?
- Employee Upgrade Rate
- High Performer Growth Rate
- High-Performer Retention Rate
- High to Low Performer Ratio
- Sustained High Performer Rate
- Performance-Based Pay Differentials
- Return on Human Investment Ratio
- % of Goals Achieved
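Definitions for these metrics vary by organization, but most reduce to simple ratios. As an illustration (the record layout, 1–5 rating scale, and group cutoffs are all assumptions), two of them might be computed like this:

```python
# Hypothetical (rating, salary, bonus) records on an assumed 1-5 scale.
workforce = [
    (5, 90_000, 9_000), (5, 80_000, 8_000), (4, 70_000, 3_500),
    (3, 60_000, 1_800), (2, 55_000, 0), (1, 50_000, 0),
]

high = [w for w in workforce if w[0] >= 4]  # assumed high-performer cutoff
low = [w for w in workforce if w[0] <= 2]   # assumed low-performer cutoff

# High to Low Performer Ratio: relative prevalence of each group
ratio = len(high) / len(low)

# Performance-Based Pay Differential: bonus as a share of salary, high vs. low
def avg_bonus_pct(group):
    return sum(bonus / salary for _, salary, bonus in group) / len(group)

differential = avg_bonus_pct(high) - avg_bonus_pct(low)
print(ratio, round(differential, 3))  # 1.5 0.083
```

A differential near zero would be a warning sign: pay is not actually following performance.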
For a discussion of additional performance metrics, read Cathy Missildine’s blog on Employee Performance Data: The Most Underused Data Set in HR.
4. What about more advanced analytics?
Last year, a customer asked me to put together a list of “workforce metrics for a high-performing company”.
One such example on my list was “Organizational Agility”, a concept which former SAP/SuccessFactors colleagues have defined as “faster response times to capitalize on market opportunities” (Regan Klein) and “the capacity to react to both internal and external forces” (Ray Rivera).
For my part, I suggested a composite index of metrics that would enable firms to measure their level of agility, drawing upon performance data (familiarity with new experiences, qualifications, time-to-productivity, etc.), personal attributes (flexibility, out-of-the-box thinking) and career mobility (promotions, transfers, ability to relocate, etc.).
Thus, performance metrics would be folded into an output measure that might better reflect how agile the individual/organization is. I would be very interested to see research that points to correlations between organizational agility and firm performance. If you have seen such data, please let me know.
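One way to sketch such a composite index: normalize each component metric to a 0–1 scale, then take a weighted average. Everything below – component names, weights, and scores – is a hypothetical illustration of the mechanics, not a validated instrument:

```python
# Hypothetical component scores for one employee, each already normalized to 0-1.
components = {
    "time_to_productivity": 0.7,  # from performance data
    "flexibility":          0.6,  # from personal attributes
    "mobility":             0.9,  # from career history (promotions, transfers)
}

# Assumed weights; in practice these would be derived empirically,
# e.g. by validating each component against business outcomes.
weights = {"time_to_productivity": 0.4, "flexibility": 0.3, "mobility": 0.3}

agility_index = sum(components[k] * weights[k] for k in components)
print(round(agility_index, 2))  # 0.73
```

Aggregating the index across a business unit would give the organizational-level view.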
Boris Snitkovskiy of PwC Saratoga provides his perspective on more advanced analytics: “There’s a lot of potential to use advanced analytics to predict employee performance and thereby business performance. However, two things are important here. First, you can only apply advanced modeling techniques when you have a robust measure of employee performance. While indexing subjective and objective indicators is most suitable here, these “outcome” measures are likely to be different by job family or seniority level. The second is using a hypotheses-driven approach to validate which factors are correlated with, rather than drive, performance outcomes – spurious correlations may not help generate the correct interventions.”
Boris shared an example of a firm that wanted to understand which characteristics and behaviors make some salespeople more effective than others. The firm had three sales divisions, each consisting of approximately 50 employees. Despite a small sales force, the company had rich data on individual demographics, skills, performance, and employee engagement archived in its databases. This enabled the company and PwC to build models that isolated which skills best predicted exceptional performance, the cultivation of which could lead to as much as an 11 percent increase in total sales.
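Boris’s hypotheses-driven point can be illustrated with a minimal first step: for each hypothesized skill, measure its correlation with the sales outcome before treating it as a driver. The sketch below uses fabricated toy numbers and a hand-rolled Pearson coefficient (the PwC models themselves were obviously far richer, and correlation alone never establishes causation):

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy data: hypothesized skill scores vs. annual sales ($K) per salesperson
negotiation_skill = [3, 5, 4, 2, 5, 3]
annual_sales =      [400, 620, 500, 310, 640, 420]

r = pearson(negotiation_skill, annual_sales)
print(round(r, 2))
```

Skills that survive this screen would then be tested in a fuller model, alongside checks for the spurious correlations Boris warns about.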
As you might expect, factors driving performance will differ from company to company, so what’s important here is setting up the right question, determining how broad your dataset will be, and getting managers to commit to actions based on the results of the information provided.
5. Any final words of advice?
I’ll close with Darren’s view on publishing performance data:
“In a perfect world, performance feedback is provided as often as necessary and close to the moment, and data updates on progress occur at least quarterly in order to give the most current and actionable perspective on individual and aggregate performance. Since it’s not a perfect world, even a quarterly pulse check with an overall ratings update will be valuable.”
Even if the data isn’t perfect, it can still be of tremendous value in assessing individual contributions and enterprise performance.
In my next blog, I’ll take a look at Succession Analytics. Happy Thanksgiving to those of you in the U.S.