Diversity and inclusion are rightfully gaining traction in the private sector as business priorities: different perspectives bring more rigorous and holistic decision-making that better serves customers, employees and shareholders. The public sector, because it serves all citizens in the public interest, can be – and at times has been – an early mover in this regard. US President Harry Truman’s decision to desegregate the military in 1948, and more recent Irish efforts to double the number of differently-abled persons in the public service, are both examples of governments acting on their uniquely broad mandate to ensure inclusion.
Yet despite such trailblazing examples, many civil services still do not resemble the societies they serve. This is especially true in top positions, where diversity is arguably most important: in 2014, The Guardian reported woeful progress in diversifying the elite ranks of the British civil service.
I would speculate that one factor in particular is both underreported and tough to discuss: socioeconomic background. Talented individuals may not immediately be identifiable as members of a disadvantaged group, but their accent, hometown, tastes and other signals, including their names, may draw the prejudice of others in a way that harms their employment prospects.
A recent Australian piece got me thinking about this; it addressed an active (and laudable) British effort to better understand the social background of British public servants. The author pointed out that top public servants in his own country were disproportionately white and male, but that there was as yet no way of knowing employees’ socioeconomic backgrounds. I suspect the same is true in most countries.
This is one reason I find SAP’s recent launch of a bias-fighting hiring solution – an actual technology – so exciting. The solution uses machine learning to help shape job descriptions in a way that opens the mind of the hiring manager (by prompting him or her to consider many forms of potential bias in language) and the minds of job-seekers (“that sounds like a job someone like me could do”). With constant fine-tuning, I believe this kind of technology can help us objectively fight for inclusion and overcome the subtle unconscious biases that can exist alongside the best intentions.
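To make the idea concrete: even without machine learning, the simplest form of such a tool is a checker that flags words in a job ad which research has associated with gendered or otherwise exclusionary language. The sketch below is purely illustrative – the word lists are hypothetical examples I chose for demonstration, and this is not SAP’s model or data.

```python
# Illustrative sketch only: a toy coded-language checker for job ads.
# The word lists below are hypothetical examples, not SAP's actual
# model, lexicon, or approach (which uses machine learning).
import re

# Small example lexicons of terms sometimes described as
# masculine- or feminine-coded in job-ad language (assumed subset).
MASCULINE_CODED = {"competitive", "dominant", "rockstar", "ninja", "aggressive"}
FEMININE_CODED = {"collaborative", "supportive", "nurturing", "interpersonal"}

def flag_coded_language(job_description: str) -> dict:
    """Return coded terms found in the text, grouped by category."""
    # Lowercase the text and extract alphabetic words only.
    words = set(re.findall(r"[a-z]+", job_description.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

ad = "We want an aggressive, competitive rockstar with interpersonal skills."
print(flag_coded_language(ad))
# → {'masculine_coded': ['aggressive', 'competitive', 'rockstar'],
#    'feminine_coded': ['interpersonal']}
```

A real system would go far beyond keyword matching – learning from outcomes which phrasings deter which applicants – but even this toy version shows how software can surface bias a well-intentioned hiring manager might never notice.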
Much of the discussion around this new tool addresses “business beyond bias,” but I think it will also help governments move beyond stubborn biases in civil service hiring practices. What do you think?