Is There a Limit to AI Diagnosis Accuracy?
NOV 01, 2017 00:34 AM


 
by Larry Alton
 
For decades now, artificial intelligence (AI) researchers have been working to develop smarter algorithms that can detect and describe problems based on patient data, such as medical history and current symptoms. Now, the medical field has many AI diagnosis tools to choose from, and they’re getting better all the time.
 
So are there any limits to what these tools can achieve? 
 
Why AI Is So Important for Diagnosis
 
You may be asking why it's so important to have AI-powered diagnostic tools in the first place. Traditionalists may be reluctant to favor a machine's judgment over that of someone with years of training and experience.
 
The key problem is the sheer complexity of medical decision-making. You can’t diagnose someone (or treat them) based on a single symptom, or even based on a group of symptoms. There are some tests that can determine the presence of certain strains of bacteria and viruses, but even then, a patient’s current condition, medical history, current medications, and other variables can strongly influence how the patient should be treated.
 
The problem becomes even more complex with hard-to-understand diseases like cancer. Despite our significant progress in cancer research, there's still so much we don't understand and so much new information emerging that it's nearly impossible to expect even an experienced oncologist to make accurate diagnoses 100 percent of the time. AI is better suited to tackle those cases because it makes decisions objectively, isn't subject to fatigue or the same cognitive biases as a human clinician, can stay current with the latest information, and can weight each variable appropriately, as the sketch below illustrates.
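To make "weighting each variable" concrete, here is a minimal, purely illustrative sketch of the kind of logistic scoring a diagnostic model can perform. The feature names, weights, and baseline are hypothetical, invented for this example; a real system would learn thousands of such weights from large volumes of patient data.

import math

# Hypothetical weights a trained model might assign to patient risk factors.
# These values are illustrative only; real models learn them from data.
WEIGHTS = {
    "age_over_60": 0.8,
    "smoker": 1.2,
    "family_history": 0.9,
    "abnormal_biomarker": 2.1,
}
BIAS = -3.0  # baseline log-odds before any risk factors are present

def risk_score(patient: dict) -> float:
    """Combine weighted patient features into a probability-like risk score."""
    z = BIAS + sum(weight * patient.get(feature, 0)
                   for feature, weight in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))  # sigmoid maps the weighted sum into (0, 1)

# Example: a hypothetical patient who smokes and has an abnormal biomarker.
print(round(risk_score({"smoker": 1, "abnormal_biomarker": 1}), 2))  # ~0.57

The point is not the particular numbers but the mechanism: every variable contributes in proportion to a learned weight, and nothing gets overlooked because a clinician happened to be tired or anchored on a first impression.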
 
The Indefinite Path of AI Improvement
 
Currently, our AI tools aren’t perfect, but they’re still better than their human counterparts. IBM’s Watson, an exemplary example of modern AI, can outperform experienced human doctors in detecting cancer early. By some estimates, it would take human doctors 160 hours of reading a week to keep up with the latest information—and AI systems can accomplish this, absorbing and incorporating all new information into its decision-making, in a far more reasonable amount of time (without disrupting its primary function).
 
In effect, even our imperfect AI tools can outperform humans at certain tasks, and there's evidence to suggest these tools could keep improving indefinitely. Most algorithms aren't limited by what they can accomplish so much as by the time it takes researchers to create them and the processing power they require to keep running.
 
Moore's Law may be nearing its end, but processing power keeps improving, which means we'll be able to support faster and better systems for the foreseeable future. And once we're capable of designing a general AI on par with human intelligence (a milestone often associated with the technological singularity), such a system could begin improving itself without the need for human intervention at all.
 
Key Limitations
 
In terms of sheer power and eventual growth, there doesn’t seem to be an upper bound for AI, but are there any other limitations of the technology? 
  • Overdiagnosis. Overdiagnosis is a growing problem in the medical industry because, in some ways, we're "too good" at detecting medical issues. A screening may flag an early-stage cancer or other disease that would never have produced symptoms or threatened the patient's life; in these cases, the patient is treated anyway, sometimes with risky interventions, introducing harm that wouldn't exist if the "problem" had gone undetected.
  • Knowing which questions to ask. Our systems are remarkable at answering questions and processing data to get us closer to the truth, but they still depend on human observers and programmers asking the right questions. If we have a fundamental misunderstanding of how a disease works, even the most advanced tools we build to deal with it will be largely useless.
  • Resistance from human doctors. Some doctors are, understandably, opposed to the adoption of AI diagnosis. They prefer to trust their own judgment because they're more familiar with it (or because they want to keep their jobs), and they may fight against the technology's widespread adoption.
  • Resistance from patients. Patients, too, may fear putting their lives in the hands of technology that, in its current form, has existed for only a few years. This resistance could lead to slower adoption rates and less demand, which could stifle research and growth in the sector.

AI diagnosis still has a long way to go, but it's already making a massive impact on how we identify and treat some of the most complex illnesses of our time. There are key limitations to what we can accomplish with it, at least for the foreseeable future, but the more we invest in its development, and the more we're willing to defer judgment to it, the healthier and better-treated we stand to be as a species.
 