Does getting rid of ratings fix performance management? Reflections on Bill Kutik’s Firing Line interview with Josh Bersin
My job involves helping customers use cloud technology to support strategic HCM processes including performance management (PM). This technology is very flexible when it comes to PM process design. It can support processes that require managers to make extensive ratings or not give any ratings at all, conduct annual reviews or frequent, ongoing employee-manager check-ins, set detailed goal plans or not use goals, and so forth. The conversations I have with companies focus on determining what PM process will be most effective in their particular organization. This will ensure the technology is appropriately configured to meet their unique needs.
Experience working with hundreds of companies has taught me that PM methods that work very well in one company may not work in another. The best approach depends on the business needs and culture of the company, the nature of the employees and the jobs they are performing, and the skills and incentives given to managers. Companies benefit from thoughtful guidance to figure out what PM methods to use. And sweeping generalizations about certain PM methods being universally good or bad are almost always wrong.
Given what I do for a living, I was excited to listen to a recent Firing Line webcast where Bill Kutik interviewed Josh Bersin about innovations in HCM practices. Whenever I listen to Bill and Josh, I almost always find myself nodding my head in agreement with their insights and opinions. So I was surprised by several statements in the interview that struck me as highly questionable. Statements suggesting that:
- PM is primarily for “feedback, coaching and developmental planning”
- PM ratings are not important because “managers and peers tend to know who the high performers are”
- Getting rid of PM ratings can lead to a “30% increase in retention”
- Eliminating PM ratings “always has a good result”
First, describing PM as mainly being a tool for “feedback, coaching and developmental planning” seems to ignore half the picture. The companies I work with use PM for two distinct but related activities:
- Workforce Investment: making decisions about where to invest scarce resources such as pay, promotions, job assignments or training courses to maximize overall workforce productivity.
- Workforce Development: providing coaching, feedback and advice to increase individual employee performance.
Both activities require communicating performance expectations and assessing job performance. But how employees should be assessed differs depending on whether the focus is investment or development. Investment assessments involve comparing employees against one another to determine which employees should be given greater compensation, development resources, or promotion opportunities. In contrast, development assessments tend to stress qualitative descriptions of employee performance and often avoid comparing people against one another because such normative evaluations can hurt development. Creating a high-performance workforce requires effective investment and development of the workforce (Bloom & Van Reenen, 2007). To be fully effective, PM processes must improve both the effectiveness of the pay and staffing decisions that affect employee careers and the quality of the coaching conversations that affect employee growth.
It seems risky to dismiss the value of using PM to guide workforce investment because “managers and peers tend to know who the high performers are”. Well-designed PM processes communicate clear performance expectations, accurately measure people against those expectations, and allocate investments based on the contributions people make to the organization. Is it wise to replace these methods with gut-level, unchallenged opinions of managers and employees? Abandoning consistent performance measurement processes and identifying high performers based on undefined, unvalidated manager opinions strikes me as akin to getting rid of school grades and determining the best students based on who has the most friends in the cafeteria.
I find it hard to believe that removing PM ratings always has a good result, and I do not believe that getting rid of ratings by itself can create outcomes like a 30% increase in retention. I have seen effective and ineffective PM processes both with and without manager ratings. The ratings are not the issue. The issue is the process used to make the ratings. If a company chooses to remove ratings from its PM process, it probably had a lousy PM rating process to begin with. And if you change a lousy process, of course people are going to be happy about it! Furthermore, call me a skeptical industrial-organizational psychologist, but I cannot believe that getting rid of a rating that only happens once a year can by itself drive a 30% increase in retention. Or if it does, then I suspect the people retained may not have been at the top of the performance distribution. A company might see a 30% retention increase during the period when it removed ratings, but “correlation is not causation”. When companies overhaul their PM systems they usually make multiple changes related to compensation, company culture and manager training at the same time. One could argue that it is those other changes that affect retention, and not the simple act of removing a once-a-year rating.
In any case, companies can’t actually get rid of performance ratings. They can only shift and hide the rating process. Every company categorizes employees based on their perceived value to the organization, which is what it means to rate employees. If you pay some people more than others then you are rating them. I’ve yet to meet a CEO who did not want to know who the high performers are in the company. So the question is not whether you will rate employees, but whether the rating process is accurate, transparent and perceived to be fair. To achieve this goal many companies are doing things like replacing numerical ratings with more meaningful performance categories, improving their use of goals and competencies to clearly define performance expectations, and shifting the rating process from something done by managers working in isolation to something done through group discussions where managers meet with their peers to identify and agree upon the most valuable players in the company. But these companies are not “getting rid of ratings”. They are improving the rating process.
Many companies have terrible PM processes that deserve to be scrapped and totally rebuilt. I’ve yet to encounter a company that felt its PM methods are as effective as they should be. The good news is more companies are taking action to improve PM. This often means downplaying the role of annual ratings in PM and increasing the focus on ongoing conversations. But it does not mean getting rid of ratings.
Improving PM is not so much about what you get rid of but what you create. These creations often include processes and training that encourage ongoing goal management and coaching so employees get meaningful feedback throughout the year. But they also include creating effective rating processes. Companies that want to truly fix PM are not ignoring the challenge of ratings. To the contrary, they are taking on this challenge by creating things such as clearly defined performance definitions and collaborative talent reviews that ensure investments in employees are based on the true value they provide to the organization and not simply on whether their manager likes them.
What do you think? How do you use ratings? What makes a rating process work? How does your company identify high performers? How does it ensure employees get effective feedback? Do you believe it is possible to get rid of ratings, and if so how do you make and explain decisions related to pay and staffing?
Thanks Steve. Companies are truly starting to get rid of ratings, changing the use of ratings, and changing the way ratings are developed. There are a lot of essential problems here: A) the rating itself is used for too much - comp, promotion, new assignment, etc. - when in fact it's only one number. B) research shows that managers are an "unreliable" source of ratings (Reinventing Performance Management - HBR) - 63% of the variation in ratings is due to the manager, not the employee. C) ratings typically follow a bell curve, which is a statistically flawed model of organizational performance (The Myth Of The Bell Curve: Look For The Hyper-Performers - Forbes), and D) ratings create a negative relationship between manager and employee and tend to create neurological reactions which are negative (http://www.your-brain-at-work.com/files/NLJ_SCARFUS.pdf). So there are a lot of challenges and changes afoot. Deloitte, Microsoft, Adobe, and many others are now experimenting, and most are doing fine without the old-fashioned model of ratings.
@Josh, thank you for continuing this conversation! As mentioned in my post, most often I find myself agreeing with you so I think this is probably a topic where the discussion is more about clarification than actual debate. So here’s a bit more clarification around my views on what is a very important but also somewhat complex topic.
First, it is important to define what we mean by “ratings”. When I say all companies rate employees whether they admit it or not, what I mean is all companies categorize employees based on perceived value to the organization. For example, if a company gives greater compensation increases to some employees over others then it is implicitly saying “we believe we get more value by spending money on one person versus someone else”. That is a form of rating. Any company that rewards some employees more than others in the form of pay, job assignments, or development opportunities is rating them. The question is whether the ratings reflect true employee value and whether employees understand the rating processes used to guide critical decisions that impact their careers.
Second, what we definitely agree on is that many companies are radically changing how they rate employees. I agree it is possible and often advantageous to eliminate ratings done independently by managers during the performance management process. But doing so shifts the rating process elsewhere in the organization. For example, a company might remove ratings from the manager-employee process and shift them to collaborative talent reviews conducted with other managers. It hasn’t gotten rid of ratings; it has just moved them to another process where they are likely to be more accurate and less detrimental.
Third and last, I want to comment on some of the citations you provided. I greatly appreciate and encourage the increased focus on performance management research over the last few years. But a lot of this research is being vastly oversimplified and overinterpreted. I tend to avoid arguments over scientific research and statistical methods outside of forums like SIOP because they can easily be counterproductive. But I do read the journals where much of this research is published and have noticed that what people say these studies have found is not always what they truly did find. We need more reviews like the one recently written by Marc Effron that critically discuss the methodological and statistical details of studies commonly cited in popular discussions of performance management. This article noted things such as:
“There’s no science that says that being rated automatically creates a negative response. Highly rated people or those rated consistent with their self-evaluation are likely to have either positive or neutral reactions. Even negative feedback is proven to be more acceptable when the source is credible, and the feedback high quality and delivered in a considerate way.”
“[The research article cited in the HBR article claiming “actual performance” has very little to do with a manager’s rating] had nothing to do with actual performance ratings or a real company’s performance management process! The research used development ratings from a Personnel Decisions International database to model what performance might be given various rating on a Profilor assessment tool.”
I encourage anyone interested in this topic to review the original article here.
Evaluating the impact of performance management methods on employee behavior is far more subtle and complex than many pundits in this area might have us believe. In my own book I emphasize that there isn’t one best way to do performance management; there are only critical design questions one needs to carefully think through, taking into account the company’s needs and culture. Even the much-vilified “forced ranking” has been shown to be effective in certain situations. And almost none of the performance management research takes into account the impact of individual differences on reactions to performance management methods. For example, some people are highly motivated by competitive, evaluative performance processes even though others may find them distasteful. A question I wish every study of performance management would address is “does this process motivate higher performers more or less than lower performers?” By definition higher performers act differently than lower performers, so it seems reasonable to expect that they might react differently to performance rating processes.
I will conclude by re-emphasizing that I completely agree with you that it is time to do away with old-fashioned ratings, for many reasons. But we need to be thoughtful about what we replace them with. And “nothing” is not an option. Companies always have and always will rate employees. But it is long past time for companies to start rethinking and radically changing how they do it.