In a 1970 Life Magazine article, Marvin Minsky, co-founder of the MIT Artificial Intelligence Lab, predicted, "In three to eight years we will have a machine with the general intelligence of an average human being." Forty-seven years later, I still can't say to my Amazon Echo "Alexa, Damage Report" and learn all the things my children have screwed up since the last time I asked. While we've made significant progress, most artificial intelligence (AI) systems struggle with tasks that your average four-year-old has mastered (like basic grammar). How could Minsky, who had developed self-propagating neural networks, be so wrong?

Historians will point to significant limits in computing power, the challenges of combinatorial explosion, and the difficulty computer science has had in developing new approaches to knowledge representation. All of these are reasonable explanations for why, decades later, we're only beginning to enter the age of Naïve Artificial Intelligence yet remain far from a machine with the average intelligence of a human being. Explaining how Minsky and others could be so wrong requires the red-headed stepchild of the sciences: psychology.

The Import of Social and Cognitive Psychology to AI

Despite my lineage (my last name is Freud, after all), Sigmund Freud's theories have no relevance and little support as the prevailing paradigm within the psychological community. Social and cognitive psychology is where the most interesting academic research is taking place, and for AI to advance, insights from these disciplines will be critical.

Daniel Kahneman and Amos Tversky are perhaps the most important psychologists of the last 150 years, and their work spawned a revolution that led to the widespread adoption of "behavioral economics." I would summarize their findings this way: humans are flawed information processors who rely on a variety of heuristics that lead to significant errors in judgment and decision making. Even the smartest among us, Minsky included, are subject to these errors.

You Just Can’t Fix Stupid…And Here’s Why

Daniel Kahneman, in his book Thinking, Fast and Slow, summarizes a lifetime of research and provides a simplified framework for how the brain works. He proposes that there are two systems:

  • System 1 is automatic and processes information quickly, so that we make judgments and decisions at a real-time pace. It intuitively processes whatever data we encounter and always delivers a judgment, even when information is incomplete or uncertainty is significant. When System 1 classifies something as "abnormal" or "surprising," System 2 activates.

  • System 2 requires effort and attention, and it's only then that we use algorithms and processes to slowly embark on a knowledge discovery process (which, if done by humans as smart as Minsky, can lead to first-principles scientific discovery). Before we can activate System 2, we're subject to the errors produced by the machinery of System 1, and the bias introduced by its heuristics is why we're all naturally stupid (a toy sketch of this dispatch follows below).
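As an analogy only, not a claim about how the brain actually computes, here is a minimal Python sketch of this two-system dispatch; every function name and threshold is invented purely for illustration. A cheap heuristic answers by default, and slower deliberate computation engages only when the input looks surprising.

```python
# A toy dual-process dispatcher, loosely inspired by Kahneman's two systems.
# All names and the "surprise" threshold are invented for illustration.

def system1_estimate(x):
    """Fast, always-available rule of thumb: cheap, but biased."""
    return 2 * x

def is_surprising(x, threshold=100):
    """System 1 flags inputs that look abnormal relative to experience."""
    return abs(x) > threshold

def system2_compute(x):
    """Slow, effortful, more careful computation."""
    return sum(x + i for i in range(3)) / 3  # stand-in for deliberate analysis

def judge(x):
    """System 1 answers by default; System 2 engages only on surprise."""
    return system2_compute(x) if is_surprising(x) else system1_estimate(x)

print(judge(7))    # routine input -> fast heuristic answer: 14
print(judge(500))  # surprising input -> slow deliberate answer: 501.0
```

The design point is the default path: the fast heuristic always answers unless something trips the surprise check, and that default is exactly where bias sneaks in.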




Heuristics and Biases 

In 1974, Kahneman and Tversky published an article in Science (volume 185) entitled "Judgment under Uncertainty: Heuristics and Biases," describing how people make decisions based on beliefs about the likelihood of uncertain events (election outcomes, say, or the guilt of a defendant). We can think of a heuristic as "a judgmental shortcut that gets us where we typically need to go quickly, but at the expense of introducing bias or error." The article describes three such heuristics:

  • Representativeness is "the degree to which an event (i) is similar in essential characteristics to its parent population and (ii) reflects the salient features of the process by which it is generated." When people rely on representativeness to make judgments, they're likely to judge incorrectly: the mere fact that something is more representative doesn't actually make it more likely (a numerical sketch after this list makes this concrete).

  • Availability relies on the immediate examples that come to a person's mind when evaluating a specific topic, concept, method, or decision. This heuristic operates on the notion that if "it" can be recalled, it must matter more than alternative explanations. Availability causes us to rely too heavily on recent or easily recalled information.

  • Anchoring and adjustment describes the inclination to rely too heavily on the first piece of information offered (the "anchor") when making decisions. Once an anchor is set, we interpret subsequent information with a bias toward it.
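To make the representativeness failure concrete, here is a minimal Python sketch of the classic lawyer/engineer setup from the heuristics literature; the probabilities are hypothetical, chosen only for illustration. Bayes' rule shows that even a description that strongly "sounds like" an engineer can't overcome modest base rates.

```python
# Base-rate neglect: a description that is highly "representative" of an
# engineer can still belong to a lawyer once base rates are respected.
# All probabilities below are hypothetical, for illustration only.

def posterior_engineer(p_engineer, p_desc_given_eng, p_desc_given_law):
    """Bayes' rule: P(engineer | description)."""
    p_lawyer = 1.0 - p_engineer
    numerator = p_desc_given_eng * p_engineer
    return numerator / (numerator + p_desc_given_law * p_lawyer)

# The description fits an engineer four times better than a lawyer...
p_desc_given_eng, p_desc_given_law = 0.8, 0.2

# ...but only 30% of the sample are engineers.
print(posterior_engineer(0.30, p_desc_given_eng, p_desc_given_law))
# ~0.632 -- far from the near-certainty the "representative" description
# suggests, and it falls to 50/50 if engineers are only 20% of the sample.
```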




In addition, System 1 uses other heuristics that lead to errors in judgment. With the Substitution heuristic, for example, System 1 assigns subjective probabilities by answering a different, simpler question, trading accuracy for speed and convenience. It is beyond the scope of this blog to go through every System 1 heuristic that leaves us naturally stupid despite the best efforts of System 2.



Exceptions to the Rule

One should acknowledge that there are significant individual differences in knowledge, skills, and abilities. There are, for example, chess Grand Masters who can play 20 games simultaneously and, through intelligence and time on task (practice), turn what for most humans is a System 2 process into a System 1 task. Even poker, which is significantly more random, has enough regularity that a clever human can implement a set of automated rules that, over time, enable success.
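In software terms this practice effect resembles memoization: a computation that is expensive the first time becomes a fast lookup on repetition. This is purely a programming analogy, not a claim about cognition, and the best_move function below is a hypothetical placeholder.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(position):
    """Stand-in for an expensive 'System 2' search over a game position."""
    return min(position)  # placeholder for deep deliberate analysis

# First call with a position: slow, deliberate computation.
# Repeat calls with a familiar position: instant lookup -- practice
# turning a System 2 process into a System 1 response.
print(best_move((3, 1, 2)))  # computed
print(best_move((3, 1, 2)))  # served from cache
```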

There are many disciplines where experts can't predict any better than chance. Think of the 2016 US presidential election, where almost every pundit predicted a Hillary Clinton victory. Even data scientist Nate Silver, who uses an incredibly sophisticated approach, was unable to predict the outcome. His System 2 capabilities are substantial, but errors in System 1 judgments lead to mistakes that surface in System 2 problem solving. Human predictions fail when there are not enough feedback loops, when the world is so chaotic that nothing is predictable, and when the correlations between features and outcomes are weak.
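The weak-correlation failure mode is easy to demonstrate. Here is a small hypothetical Python simulation, with all numbers invented for illustration: when a feature only faintly tracks the outcome, even a rule that predicts straight from the feature barely beats a coin flip.

```python
import random

# When features barely correlate with outcomes, even a sensible rule
# hovers near chance. Hypothetical simulation, for illustration only.
random.seed(42)

def simulate(signal_strength, n=100_000):
    correct = 0
    for _ in range(n):
        outcome = random.random() < 0.5  # the true result is a coin flip
        if random.random() < signal_strength:
            feature = outcome                # feature reflects the outcome...
        else:
            feature = random.random() < 0.5  # ...or is pure noise
        correct += (feature == outcome)      # predict straight from the feature
    return correct / n

print(simulate(0.90))  # strong signal: about 0.95 accuracy
print(simulate(0.05))  # weak signal: about 0.525 -- barely above chance
```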

Artificially Intelligent

Since our internal cognitive machinery is prone to error and our ability to change it is limited, the next major leap in knowledge discovery will come via automation in machine learning and other AI technologies. In my next blog, we'll explore the issues associated with the adoption of AI into our personal and professional lives.

For More Information