Will machines replace HR and managers when it comes to decisions about people? That's the question at the crux of the Fortune article published yesterday, "Wall Street Is Betting on Software to Find Top Talent."
Machine learning algorithms can help us make better decisions, in part because of the large amounts of data they extract: from the context surrounding decisions made in the past to the outcomes that resulted. Each transaction adds to a historical map from which better decisions can be drawn. But research also shows that these algorithms may learn and incorporate bias into their decision-making. When it comes to people at work and many human capital decisions, there is no cookie-cutter approach, and those past decisions may not form the best map for informing future outcomes.
Algorithms are susceptible to the same biases as the humans who program them, particularly when you consider the nature of the data driving them. If financial institutions (as mentioned in the Fortune article) program in past markers of success, they are likely programming in biased decisions made around performance ratings, developmental opportunities, rewards, and promotions. A quick look at the demographic characteristics of those at the top level of most financial institutions reveals the historic bias of these decisions. So while machine learning algorithms mine data to extract the characteristics and commonalities that drive top performance, they may also be finding superfluous, superficial commonalities that have little to do with performance and serve only to exclude groups that don't fit the mold. Just because your existing top performers share something in common does not mean that thing is desirable, or that it makes sense to keep selecting talent on that basis. Unlike humans, who can, with attention and intention, mitigate the biases that color their decision-making, machines cannot correct themselves.
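The dynamic described above can be illustrated with a toy simulation (the groups, numbers, and "learned" rule here are entirely hypothetical, not drawn from any real hiring system): skill is distributed identically across two groups, but past hiring decisions favored one group. A naive learner that mines the history for traits common to past hires picks up group membership as a marker of "success" and carries the bias into future selection.

```python
import random

random.seed(0)

def make_candidate(group):
    """A candidate with a group label and a skill score; skill is
    identically distributed in groups A and B."""
    return {"group": group, "skill": random.gauss(0.0, 1.0)}

# Biased historical data: past decision-makers gave group A a large
# unearned boost, so group A dominates the set of past hires.
history = []
for _ in range(2000):
    c = make_candidate(random.choice("AB"))
    hired = (c["skill"] + (1.5 if c["group"] == "A" else 0.0)) > 1.0
    history.append((c, hired))

# A naive "learner" mines the history for commonalities among hires
# and finds that group membership correlates with being hired.
hired_candidates = [c for c, h in history if h]
share_A = sum(c["group"] == "A" for c in hired_candidates) / len(hired_candidates)

def model_selects(candidate):
    """Learned rule: weight group membership because it predicted
    past hires, reproducing the original bias."""
    boost = 1.5 * share_A if candidate["group"] == "A" else 0.0
    return candidate["skill"] + boost > 1.0

# Apply the learned rule to a fresh pool with equal skill by group.
pool = [make_candidate(random.choice("AB")) for _ in range(2000)]
rate_A = sum(model_selects(c) for c in pool if c["group"] == "A") / \
    sum(c["group"] == "A" for c in pool)
rate_B = sum(model_selects(c) for c in pool if c["group"] == "B") / \
    sum(c["group"] == "B" for c in pool)
print(f"Selection rate, group A: {rate_A:.2f}  group B: {rate_B:.2f}")
```

Nothing in the new candidate pool justifies the gap in selection rates; the model simply inherited it from the history it was trained on, which is the point: the data, not the math, carries the bias.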
So how do we balance the desire for faster, more automated hiring, which an increasingly fast-paced world of work demands, with the need to make selection decisions based on the right things? The answer is just that: balance. There is a significant opportunity for machine learning algorithms to inform and improve, but not totally replace, human decision-making. If we treat the output of these programs as just one of several important data points in the hiring process, we strengthen our ability to make decisions based on the criteria our businesses need rather than the characteristics we implicitly identify with "success." Machines and data can be our allies in building more inclusive, innovative workforces, but if we use them as the ultimate deciders, we simply transfer bias from ourselves to the tools we create.