We overlook fundamental understanding of complex systems at our peril
Many drugs in use today are safe and effective, yet the precise mechanisms by which they work are unknown. The path from discovery to approval relies on trial and error, and a safe, successful molecule or protein may act in poorly understood ways. Aspirin, for example, was discovered in the late 1800s, but a convincing explanation of how it works did not come until 1995. A similar pattern is emerging in the rapidly evolving domain of Artificial Intelligence (AI). Machine Learning is an area of great promise, yet the technology cannot explain how its results arise from the data it observes.
Machine Learning works by identifying patterns in large data sets. In healthcare, one vaunted study showed that the technique could rapidly and accurately detect retinal disease. A Machine Learning system analysed high-resolution 3D scans of the back of the eye from 1,000 patients whose outcomes were already known. Its diagnoses, compared against those of experienced ophthalmologists and opticians, showed an error rate of only 5.5%, and it did not miss a single urgent case. However, Machine Learning cannot explain how this result was obtained; it yields answers only through statistical correlation.
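The kind of evaluation described above can be made concrete with a small sketch. All of the data below is invented for illustration (the labels and the ten scans are not from the study); the point is that such a system is judged purely by comparing its predictions with known outcomes, which yields an error rate but no mechanism.

```python
# Hypothetical ground-truth diagnoses and model predictions for ten scans.
# Labels ("healthy", "routine", "urgent") are invented for this sketch.
truth = ["healthy", "urgent", "routine", "healthy", "urgent",
         "healthy", "routine", "routine", "healthy", "urgent"]
pred  = ["healthy", "urgent", "routine", "routine", "urgent",
         "healthy", "routine", "healthy", "healthy", "urgent"]

# Overall error rate: how often prediction and known outcome disagree.
errors = sum(t != p for t, p in zip(truth, pred))
error_rate = errors / len(truth)

# The study's headline safety property: no urgent case may be missed.
urgent_missed = sum(t == "urgent" and p != "urgent"
                    for t, p in zip(truth, pred))

print(f"error rate: {error_rate:.1%}")          # 20.0% on this toy data
print(f"urgent cases missed: {urgent_missed}")  # 0
```

Note what the comparison does and does not tell us: it certifies accuracy against outcomes we already know, but says nothing about why any individual scan was classified as it was.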
If Machine Learning's answers are correct, or if aspirin relieves pain with minimal side effects, do we really need to know how they work?
Today we see a proliferation of drugs with unknown mechanisms, each shown to be individually safe. But for people who take multiple medications, we need to worry about drug interactions: the combined effects of drugs acting simultaneously can be dangerous, even life-threatening, and they are typically discovered only once the drugs are already on the market. Because we do not know how the drugs work, we cannot predict these adverse events in advance; we can only detect them post-approval.
Now consider an analogous development in Machine Learning, where causal mechanisms can likewise be unknown. Such systems will not work in isolation. As AI systems proliferate, data produced by one system will be consumed by others, and a similar problem of adverse events will likely arise: instead of drug interactions we will see algorithm interactions. The behaviour of Machine Learning “in the wild” promises to be unpredictable. We have already seen problems in financial markets, where advanced Machine Learning is widely deployed. In 2010, a “flash crash” driven by algorithmic trading wiped more than a trillion dollars from the major US indices for 36 minutes before corrective action could be taken.
A further challenge arises from the very fact that AI can make discovery faster and more accurate, calling into question the need for research that yields analytical insight. Theoreticians may come to be seen as superfluous, but we would jump to that conclusion at our peril. In our headlong rush to embrace the new and the promising, we need to bear in mind the law of unintended consequences and prepare for the downsides. Understanding mechanisms and causation needs increased investment if the full value of AI's exciting potential is to be realized.