Numbers don’t lie. As it turns out, though, they do sometimes tell some very misleading stories.
Take the polls leading up to the recent US election. Pundits on the left and right each accused various pollsters of bias (it’s only bias when it favors the opposing candidate, of course) and, overall, many of the best-known polling firms produced dismally unreliable forecasts.
Are we now so polarized that even our polls can’t be trusted?
According to Nate Silver – author of The Signal and the Noise, creator of the FiveThirtyEight model and blog in the New York Times, and now the most famous statistician/math nerd in the country, given his dead-on accuracy in calling the election results – many pollsters’ methodologies had multiple flaws, often because they failed to recognize changes in voters’ use of technology.
Huh? How could technology affect the results of public opinion polls?
Actually, some of the issues seem obvious, when you think about it. Traditional telephone surveys often call only land lines, not cellphones, which means those polls significantly undercounted younger, urban and minority voters, who are more likely to use cellphones exclusively. And all phone surveys, even those that include cellphone users, have a high miss rate. (How many calls did you ignore in the last few weeks?)
In fact, according to Mr. Silver, some of the most accurate results came from firms that conducted their surveys online, where reach and response rates were much higher.
Of course, this is a totally oversimplified view of Mr. Silver’s findings. But why am I even talking about this at all?
At SAPInsider, I did a short interview with Dave Hannon, and one of his questions was about the overall importance of change management as opposed to training alone.
I noted that training is simply one element of change management, then explained why a good change management program is a critical part of creating the broader adoption and enablement environment that allows the customer to achieve long-term goals. And that involves managing – and measuring – change.
But in the workplace, just as in politics, whom and how we measure can affect the results we see. And what we “know” may not actually be true, complete, or applicable. That’s why just looking at the system is not enough. Are we measuring the relevant things? Are we measuring accurately? Are we interpreting the data correctly? Are we rewarding the right things?
For example, measuring users’ proficiency only on the tasks they have been trained for shows just one small part of the picture. You have to understand both how and why users perform their tasks and interact with their tools in the ways that they do.
Taking into account “a day in the life” of the user is the only way to be sure to measure accurately, which is the only way to be able to provide the full spectrum of support that’s needed. As the song says, “and though the holes were rather small, they had to count them all.”
In other words, changing skills and behavior is key to success. (To hear a couple of real-life examples, including how McKesson’s award-winning program led to outstanding operational results – and translated into outstanding financial results – listen to the entire interview at http://bit.ly/11qNEBI.)
Inaccurate results won’t clear a path to improvement. In fact, if politicians are any indicator, inaccurate results don’t even help us come up with good excuses. A good change management program can help you get at the truth.