The healthcare panopticon
The diagnosis of medical conditions is undergoing a significant transformation as we move from rule-based approaches to approaches modeled on how the human brain detects patterns. This new approach uses so-called neural networks to enable machine learning, as described in an article recently published in The New Yorker (April 3, 2017).
Background
Consider a skill that almost every person acquires early in life: riding a bicycle. We know that a bike is composed of two wheels, a frame, a saddle, handlebars and so on. Even a youngster can understand what these components do, but that same youngster has no way to connect the “what” with the “how”: it is almost impossible to imagine how this assembly of parts can allow us to balance once we get on and start pedaling. Yet we very quickly learn to do exactly that, usually with a parent’s help. And after we’ve mastered the basics, we instinctively pick up many additional skills: turning a corner at low speed, for example, which requires us to lean the opposite way, or slowing down while tackling a steep hill, where we lean forward. The “what” of the bicycle’s construction is of absolutely no help here; the “how” is acquired by involuntarily learning to adapt as we try each manoeuvre.
Learning such skills, which rests on comparing and adapting rather than working by rote through a series of rules, is the basis of the exciting new world of neural networks. Diagnosing medical conditions is one of the more promising areas for the application of neural networks.
Diagnosis
Consider the life-threatening condition of stroke. The onset of a stroke is determined when a radiologist examines a computed tomography (CT) scan of the brain. A significant challenge is time, since some part of the brain is dying with every minute that passes. When the victim is suffering from a stroke, the CT scan shows a hard-to-detect haziness of the crisp borders between the brain’s anatomical structures. All the radiologist sees is a hint of something wrong, the premonition of a stroke, that you or I would likely not even notice. The radiologist has learned to identify this “hint of something awry” in a way that is very similar to how we all learn to ride a bike. Although there are rules to narrow down the diagnosis, the real art is not so straightforward. And the art of diagnosis is used across the spectrum, from diagnosing the reason for a cough, through differentiating skin cancer from rashes or acne, to evaluating stroke.
A few years ago, to determine what part of the brain performs this “art of diagnosis”, researchers scanned the brains of radiologists themselves as they examined images. The findings showed that the part of the brain used to make a diagnosis was the same part used in pattern-matching to recognize commonplace objects. Such pattern recognition allows us all, for example, to recognize a wolf not by comparing it to (say) a dog, but simply by having learned the pattern of what a wolf looks like. The question then arises: can a computer also analyse by pattern recognition rather than by applying rules? More intriguingly, can such a computer “grow and learn”?
Neural networks can learn
First-generation rule-based systems have no built-in way to learn: a machine that has seen thousands of x-rays is no wiser than one that has seen only a few. Moving from a rule-based to a neural-network-based diagnostic architecture allows learning. Neural networks mimic the brain’s synapses, which are strengthened and weakened through repeated activation. In a neural network this behavior is simulated electronically by adjusting the weights of the connections between nodes, as the sketch below illustrates.
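To make the weight-adjustment idea concrete, here is a minimal sketch, purely illustrative and not drawn from any real diagnostic system: a single artificial “neuron” whose connection weights are nudged up or down according to the error it makes on labelled examples. The task (learning the logical OR function), the learning rate, and the numbers are all assumptions chosen for brevity.

```python
# A toy of the weight-adjustment idea: one artificial neuron whose
# connection weights are strengthened or weakened according to the
# error it makes on labelled examples. Real diagnostic networks have
# millions of weights, but the principle is the same.

import random

random.seed(0)

inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 1]                      # labelled "answers" (logical OR)

weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0
rate = 0.1                                  # learning rate

for epoch in range(50):
    for (x1, x2), target in zip(inputs, targets):
        output = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
        error = target - output
        # Strengthen or weaken each connection in proportion to its
        # contribution to the error: the electronic analogue of a
        # synapse being reinforced through repeated activation.
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error

print(weights, bias)   # the learned "synaptic strengths"
```

After training, the neuron answers correctly not because anyone wrote a rule for OR, but because repeated exposure to labelled examples settled its weights into a working configuration.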
When neural networks’ ability to diagnose skin cancer was tested, it was found that they can be taught to distinguish cancer from benign conditions such as a rash or a mole. The process starts by creating a “teaching set”, a large collection of images, each labelled malignant or benign, used to teach the machine what to look for. The machine knows nothing of the standard rules used to indicate malignancy; it is fed only the images. By testing itself against hundreds of thousands of classified images, the machine develops its own way to recognize cancer. When its results are compared with those of qualified dermatologists, the machine’s estimate of the probability of cancer proves correct more often than the dermatologists’. That is, the machine is less likely to miss a melanoma. The strangest thing about all of this is that we cannot tell what the neural network is picking up. All the internal adjustments happen away from our scrutiny; the network behaves as a black box. We can’t know how it reaches its conclusion. And it can’t tell us.
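As a hedged sketch of how such a teaching set is used, the toy below trains a small convolutional network on labelled images. Everything here, the network’s shape, the image size, and the random stand-in data, is an illustrative assumption; the actual dermatology systems are far larger and train on real photographs.

```python
# A sketch of "learning from a teaching set": a small convolutional
# network taught to separate two classes of images from labelled
# examples alone, with no hand-written rules.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 2),        # two outputs: benign, malignant
)

# Stand-in for a labelled teaching set: random 64x64 colour images
# with random benign/malignant labels (purely to make this runnable).
images = torch.randn(128, 3, 64, 64)
labels = torch.randint(0, 2, (128,))

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                    # how far off were the guesses?
    optimiser.step()                   # nudge every weight accordingly

# The trained model outputs a probability of malignancy for new images,
# yet nothing in its thousands of weights can be read off as a "rule".
probs = torch.softmax(model(images[:1]), dim=1)
print(probs)
```

The black-box quality described above falls straight out of this structure: the knowledge lives in the numerical weights, and there is no step at which the network states a reason for its verdict.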
The future
Such astonishing power offers some very compelling possibilities. Our cellphones could analyze shifting speech patterns to diagnose Alzheimer’s. A steering wheel could pick up the onset of Parkinson’s disease through small hesitations and tremors. In advance of a visit to a specialist, an iPhone photo emailed to a powerful offsite network could screen for disease, increasing the reach and the effectiveness of the medical expert. Ultimately Big Data could watch, record and evaluate us almost like a diagnostic panopticon. Such a vision would extend the capability of experts in the field by taking care of the diagnostics while the experts attend to the human condition. Unlike human diagnosticians, such systems learn from their mistakes at scale and refine their technique, and they accumulate knowledge that today is distributed among thousands of practitioners. The potential is astounding.
Neural networks move us from “knowing that” to “knowing how”, leaving the medical professional to focus on the “knowing why” – that is, to untangling the often very complex causes of a condition.
Now that is progress indeed.