Vol. 27, No. 3, May-June 2012, pp. 4-9
Published by the IEEE Computer Society
ABSTRACT
Many approaches are incorporating AI into healthcare systems, but AI use in healthcare still faces several challenges, as George Lawton describes in “Healthcare Has Mixed Feelings about AI.” In this issue's other department articles, Lawton writes about developers using AI to help build videogames in “Researchers Use AI to Build Games” and about a coordinated swarm of flying robots that plays musical instruments in “Robots Coordinate Flight Paths to Play Music.”
Healthcare Has Mixed Feelings about AI
George Lawton
For decades, AI and medical experts have discussed how AI could improve healthcare.
They've said it could help doctors diagnose diseases and choose treatments, enable better analysis of large collections of patient information, and reduce human error in recording data and providing medications in the correct dosages.
Early enthusiasm for AI use in healthcare faded during the 1990s because cost, computer-performance, and networking limitations prevented early systems from delivering all of the promised benefits, says Ed Shortliffe, a well-known physician, biomedical informatician, and computer scientist, as well as a past president of AMIA (formerly called the American Medical Informatics Association).
Now, though, improvements in AI technology, processing power, network speeds, storage, and electronic health record (EHR) adoption are increasing AI use in healthcare. Techniques such as natural language processing (NLP), decision support, neural networks (NNs), and expert systems are being implemented.
The technology is frequently embedded into various types of medical software, tools, and equipment.
It is being employed to improve treatment workflows, enable data exchange among different EHR databases, and make sense of reams of information that new diagnostic techniques, as well as genomic and protein-analysis tools, collect.
Nonetheless, AI use in healthcare still faces significant challenges.
Early Days
During the 1970s and 1980s, researchers began looking at utilizing AI to improve medical decision-making. Early programs used expert systems to help with clinical decision support.
For example, University of Pittsburgh researchers developed Internist-I in 1974 to improve general diagnostics. Stanford University scientists designed Mycin in 1976 to help doctors choose the best antibiotic for different types of infections. And a Stanford team released Oncocin in 1979 to help diagnose whether a set of patient symptoms might indicate cancer.
Shortliffe, who helped develop Mycin and Oncocin, says these early applications led to considerable media interest and outside investment.
However, high costs and limited networking capabilities restricted these applications' usefulness. A physician or nurse had to take time to go from patient or treatment rooms to the centralized location of the computer running the AI application to get information. Moreover, users couldn't integrate most of the early systems into their ongoing workflows.
Thus, interest in using AI in healthcare declined until recently.
AI Healthcare Implementations
Since then, developers have focused on making sure their AI-based medical decision-support systems could integrate into existing physician workflows, says Jack Smith, dean of the University of Texas Health Science Center at Houston's School of Biomedical Informatics.
Researchers have begun designing a new generation of systems that responds to criticisms of early AI-based medical-care applications' interfaces and of their limited networking, storage, and processing capabilities. They've also taken advantage of newly available technologies.
In some cases, AI is being incorporated directly into EHR tools. In others, systems such as IBM's Watson supercomputer are providing medically related services such as speech recognition, decision-making, and diagnostics. Some applications use machine vision and NNs to analyze biopsy samples and make diagnoses.
Mayo Clinic
Mayo Clinic researchers are employing an artificial NN to analyze large collections of patient data to diagnose cardiac infections with fewer invasive exams.
The scientists trained the NN with a dataset from patients diagnosed with heart infections. A test on information from a second set of patients showed that the approach was accurate in 72 of 73 cases.
When used on new patients' data, the system eliminated the need for an invasive test in about half the cases in which a doctor otherwise would have requested such a procedure.
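As a rough illustration of this kind of approach, the sketch below trains a small feedforward NN on tabular patient features and validates it on a held-out cohort. The data, feature count, and network size are synthetic placeholders, not details of the Mayo Clinic's actual system.

```python
# Illustrative sketch only: a small feedforward NN trained to flag likely
# cardiac infections from tabular patient features. The data, features, and
# network size are synthetic placeholders, not the Mayo Clinic's actual model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                   # 12 hypothetical measurements per patient
y = (X[:, 0] + 0.5 * X[:, 3] > 0.8).astype(int)  # toy "infection" label

# Train on one cohort, validate on a held-out second cohort, mirroring the
# train-then-test setup described above.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("held-out accuracy:", net.score(X_test, y_test))
```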
Intelligent Predictive Tools
Other researchers are looking at AI to better analyze the enormous amount of data now stored in EHRs, and to make diagnoses and recommend treatments.
For example, the nonprofit UhealthSolutions is working on a suite of tools for mining such information.
The suite uses NNs to recognize patterns and identify patients at high risk of potential health problems that require early monitoring. It also employs genetic algorithms to look at large collections of disease patterns and the progression of patients' preexisting conditions. The algorithms test and adjust different rules that correlate the preexisting conditions to disease outcomes.
Fuzzy logic provides a flexible way to analyze complex data. Rule-based systems recommend a progression of questions useful for identifying a disease and, on the basis of the answers, suggest reference sources relevant to the suspected condition.
UhealthSolutions researchers say that using a combination of AI techniques improves the accuracy of disease-outcome predictions from the 20 percent achieved with non-AI statistical tools to between 75 and 95 percent.
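To make the genetic-algorithm component concrete, here is a minimal sketch that evolves simple threshold rules correlating preexisting-condition scores with outcomes. The rule encoding, fitness measure, and data are assumptions made for illustration; they are not UhealthSolutions' actual design.

```python
# Minimal genetic-algorithm sketch: evolve threshold rules of the form
# "predict the disease if the weighted sum of condition scores exceeds t".
# The encoding, fitness measure, and data are assumptions for illustration.
import random

random.seed(1)
# Toy records: (three preexisting-condition scores, known disease outcome).
DATA = [([random.random() for _ in range(3)], random.random() > 0.5)
        for _ in range(200)]

def fitness(rule):
    w1, w2, w3, t = rule
    hits = sum((x[0]*w1 + x[1]*w2 + x[2]*w3 > t) == y for x, y in DATA)
    return hits / len(DATA)   # fraction of outcomes the rule gets right

def mutate(rule):
    return [g + random.gauss(0, 0.1) for g in rule]   # adjust the rule slightly

population = [[random.random() for _ in range(4)] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                         # keep the best rules
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

print("best rule accuracy:", fitness(max(population, key=fitness)))
```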
IBM Watson
IBM has taken its Watson supercomputer's sophisticated NLP and query engine into the medical arena.
This application—which can sift through 200 million pages of material in three seconds—uses NLP to interpret spoken or textual queries and a knowledge-based system to provide decision support. It is being tested at the Columbia University Medical Center, the University of Maryland School of Medicine, and Los Angeles' Cedars-Sinai Medical Center.
Researchers hope the system can automatically analyze patient records, medical journals, and clinical-trial data to identify patterns and suggest treatments.
C-Path
Researchers are turning to advanced machine-vision and machine-learning techniques to improve the visual analysis of microscopic images.
For example, Stanford University School of Engineering and Stanford School of Medicine scientists have developed the Computational Pathologist (C-Path) to better analyze images of breast cancer cells (see Figure 1).


Figure 1. Stanford University scientists have developed the Computational Pathologist application to study images of breast cancer cells. Traditionally, humans can analyze only three potentially problematic features. The Stanford application, on the other hand, analyzes 6,642 factors, which lets it more precisely distinguish between different cancer patterns.

Traditionally, humans have been able to analyze only three potentially problematic features in these cases: the percentage of the tumor composed of tubelike cells, the diversity of the nuclei in the outermost cells, and the frequency of cell division.
In contrast, C-Path can analyze 6,642 factors. Using more factors affords greater precision in distinguishing between many closely related patterns of cancer, which yields better treatment strategies.
The researchers trained the system with data from patients whose prognosis was already known.
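The sketch below captures the C-Path idea at toy scale: fit a model over thousands of candidate image features using patients whose outcomes are known, and let regularization pick out the informative ones. The synthetic data and the choice of L1-regularized logistic regression are illustrative assumptions, not the Stanford team's published method.

```python
# Toy-scale sketch of the C-Path idea: score thousands of candidate image
# features per patient and fit a model against known outcomes, letting
# regularization select the features that carry prognostic signal. The data
# and model choice here are assumptions, not the published method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_patients, n_features = 300, 6642          # one column per image-derived feature
X = rng.normal(size=(n_patients, n_features))
y = rng.integers(0, 2, size=n_patients)     # placeholder survival labels

# L1 regularization drives most coefficients to zero, so only a small subset
# of the 6,642 candidate measurements ends up in the fitted model.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)
print("features retained:", int(np.count_nonzero(model.coef_)))
```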
NCCD
The National Center for Cognitive Informatics & Decision Making at the University of Texas at Houston is conducting research to improve the usability of intelligent systems within healthcare workflows, as well as develop improved EHR standards, healthcare-related IT systems, and medical-device communications.
The US Office of the National Coordinator for Health Information Technology recently provided US$15 million to help fund the project.
To Use AI or Not to Use AI
With all of its benefits, AI use in healthcare still faces several noteworthy challenges.
A big hurdle is the difference in the semantics that various medical-related applications use, says the University of Texas's Smith. Hospitals, medical personnel, and even billing systems also describe similar conditions in different ways.
As AI tools try to work with data from these sources, translation mistakes could lead to errors with potentially serious consequences. Better semantic-processing tools are needed to cope with these issues.
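As a toy example of the normalization step such semantic-processing tools must perform, the sketch below maps the different labels that hospitals, clinicians, and billing systems might use for the same condition onto one canonical code before analysis. The synonym table and codes are invented stand-ins for real clinical terminologies.

```python
# Toy normalization step: map the different labels that record the same
# condition onto one canonical code before pooling data across systems.
# The synonym table and codes below are illustrative stand-ins for real
# terminologies such as ICD or SNOMED CT mappings.
SYNONYMS = {
    "heart attack": "I21",
    "myocardial infarction": "I21",
    "acute mi": "I21",
    "high blood pressure": "I10",
    "hypertension": "I10",
}

def normalize(term: str) -> str:
    code = SYNONYMS.get(term.strip().lower())
    if code is None:
        # Unmapped terms are flagged rather than silently guessed, since a
        # wrong translation here could have serious clinical consequences.
        raise ValueError(f"unmapped term: {term!r}")
    return code

# Records that two systems describe differently now land on one code.
assert normalize("Myocardial Infarction") == normalize("heart attack")
```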
Another challenge lies in integrating AI technologies, additional types of software, interfaces, and other components into a single application that provides both a smooth user experience and helpful knowledge, says Smith.
These capabilities must be deeply embedded in healthcare-related information systems and workflows for them to be used successfully, says Microsoft Research distinguished scientist Eric Horvitz.
Another concern is that sophisticated new EHR systems with AI-based tools could be hard to use and could thus slow down the patient-treatment workflow. Smith says the time doctors must spend entering patient information into an EHR system could reduce the number of patients a physician sees by up to 50 percent while they are learning to use the system and by 10 to 25 percent thereafter.
And experts will have to monitor even sophisticated AI applications to make sure their diagnoses are appropriate for a given patient's context.
Toward an Intelligent Future?
Smith predicts healthcare professionals will increasingly employ NLP to extract useful information from text.
He also says AI will be embedded into systems and products in ways that make the technology easy to use, thereby encouraging adoption. This type of integration could eventually lead to devices that even consumers could use for health evaluation, he notes.
In the long run, Smith says, AI techniques will be necessary to make sense of the mountains of data now being collected that are too much for individuals to analyze.
Andrew Beck—a doctor who is currently a Stanford PhD student in biomedical informatics and who also worked on C-Path—says new machine-vision techniques will lead to a wealth of new information and massive databases. However, he notes, making sense of all this information will require more powerful computers and AI-based tools.
Researchers Use AI to Build Games
George Lawton
A UK doctoral student has developed an evolutionary-computing approach to designing video games from the ground up.
Imperial College London PhD candidate Michael Cook developed the Angelina (a recursive acronym for "a novel game-evolving labrat I've named Angelina") game-development application.
Previous research has looked at evolving only certain elements of existing games. Cook says his work is the first to look at evolving multiple elements simultaneously to create new games.
Angelina uses a variant of computational evolution called cooperative coevolution, which divides a big problem into smaller parts and solves the subparts independently in order to resolve the larger question.
The purpose is to optimize games for fun, which the system calculates on the basis of degree of difficulty and the length of time necessary to complete levels.
Angelina can generate and test up to 50,000 variations in a full game-creation cycle.
Working with Angelina
Angelina begins creating a game with all of the graphics—including enemy animations and game layout elements such as walls and trees—supplied by a human developer.
The application arranges the graphical elements in multiple ways and generates rules for different versions of a game.
Angelina considers matters such as the layout of game elements for each difficulty level and "powerups," which are items that give players special abilities such as increased speed or jumping ability.
"The system is built up of multiple evolutionary processes," Cook explains, "but each process is only responsible for part of the solution."
During each step, if the difficulty-level design process develops certain characteristics, such as a particular pattern of barrier walls, other elements will evolve accordingly. For example, if the system makes walls higher, it might also give players more jumping ability.
Angelina also looks for problems with the specific game designs it creates.
During each evolutionary cycle, the system evaluates its population of games, selects the best designs, and recombines them to create the next set of candidates to test.
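A minimal sketch of such a cooperative-coevolution loop appears below, with two toy component populations (level layouts and powerup sets) and a stand-in "fun" score that rewards matching difficulty to player mobility. The encodings, scoring heuristic, and population sizes are all assumptions for illustration and are far simpler than Angelina's actual processes.

```python
# Toy cooperative-coevolution sketch: two populations evolve different game
# components, and each candidate is scored alongside the best-known partner
# from the other population. All encodings and the "fun" heuristic are
# invented for illustration; Angelina's real processes are far richer.
import random

random.seed(0)
POP_SIZE, KEEP = 20, 5

def fun_score(layout, powerups):
    # Stand-in heuristic: a level is "fun" when obstacle density roughly
    # matches the mobility the powerups grant (cf. higher walls paired with
    # more jumping ability).
    difficulty = sum(layout) / len(layout)
    mobility = sum(powerups) / len(powerups)
    return 1.0 - abs(difficulty - mobility)

def mutate(bits):
    child = bits[:]
    child[random.randrange(len(child))] ^= 1   # flip one design decision
    return child

def step(pop, partner_best, score):
    # Evaluate each individual with the best partner, keep the elite,
    # and refill the population with mutated copies of the elite.
    pop.sort(key=lambda ind: score(ind, partner_best), reverse=True)
    elite = pop[:KEEP]
    return elite + [mutate(random.choice(elite)) for _ in range(POP_SIZE - KEEP)]

layouts = [[random.randint(0, 1) for _ in range(16)] for _ in range(POP_SIZE)]
powerups = [[random.randint(0, 1) for _ in range(8)] for _ in range(POP_SIZE)]

for generation in range(40):
    layouts = step(layouts, powerups[0], lambda l, p: fun_score(l, p))
    powerups = step(powerups, layouts[0], lambda p, l: fun_score(l, p))

print("best fun score:", fun_score(layouts[0], powerups[0]))
```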
Cook says Angelina is effective because its processes mimic effective game design, a cooperative task involving artists, designers, programmers, and producers working independently and then coordinating ideas and influencing one another.
Angelina isn't totally autonomous, as a human must still create the graphics, sound effects, and music that go with a game.
Challenges
According to Cook, a big challenge is defining a meaningful way to calculate how much fun a game is to play.
"At the moment," he explains, "all I can use is basic heuristics and ideas I've drawn from talking to game designers." He says he's still looking for ways to improve this process, which is an important part of Angelina's effectiveness.
Another key issue is the time and computational resources required to simulate game play and calculate fun.
Angelina has few problems with this process when working on the relatively simple 2D maze games with enemies and powerups for which it has been initially designed. However, complications could arise for work on graphics-intensive console or PC games.
Looking ahead with Angelina
Cook says his techniques could be useful in completing and improving partial game designs that other people start.
He says techniques similar to the ones Angelina uses could be applied to games such as 3D shooters or real-time strategy titles.
Imperial College London research fellow Cameron Browne says he has been using a standard evolutionary approach, one without cooperative coevolution, to evolve various board games. However, he explains, the standard techniques are inherently random and thus don't necessarily explore all the options that could yield highly effective games.
Cook says none of the games that Angelina has developed—which are on his Games by Angelina website ( www.gamesbyangelina.org)—are the equal of today's advanced commercial games.
However, he notes, some are clever, and many are like early arcade games or those currently found on smartphones.
To make the games more complicated and fun, Cook says, he is exploring ways to include human feedback toward the end of Angelina's game-design process, perhaps via simple yes/no surveys whose responses could be fed into the application.
The next big step, he explains, will probably be to incorporate multiple cooperative-coevolutionary processes, one for each of the various design stages. Angelina would thus start by prototyping high-level game designs and then work on increasingly specific development stages.
He says, "I hope my research will help computational-creativity researchers build better, more complex design systems. Angelina will also, over time, incorporate many other AI techniques and investigate their usefulness in hard creative problems."
Cook plans to release Angelina as open source software so that other game developers can adopt and adapt it.
"One thing the games industry could do," he explains, "is open up its data, engines, and games to academics for us to work and experiment with."
"Obviously," he adds, "I'm not expecting every publisher and developer to hand everything over for free. But even a great API … could allow researchers to get their hooks into a real, live game and start building systems to generate content for it."
Robots Coordinate Flight Paths to Play Music
George Lawton
Two US graduate students have developed a team of small flying robots that can coordinate their movements on their own well enough to play musical instruments.
The research addressed the challenge of precisely controlling swarms of cooperating robots that coordinate their movements, a capability with numerous potential military and civilian uses.
Alex Kushleyev and Daniel Mellinger—doctoral students in the University of Pennsylvania's General Robotics, Automation, Sensing, and Perception (Grasp) Lab—designed the "quadrocopters."
Kushleyev's and Mellinger's advisor, professor Vijay Kumar, presented the robots' first musical performance, of the James Bond theme song, at a recent TED conference (see Figure 2).


Figure 2. Two University of Pennsylvania doctoral candidates have designed flying robots that coordinate their motions on their own sufficiently well to play drums, a keyboard, maracas, a cymbal, and a guitar-like instrument in harmony.

The devices coordinated their actions well enough to play drums, a keyboard, maracas, a cymbal, and a guitar-like instrument in harmony.
System Anatomy
The researchers built the flying robots with accelerometers to sense movement, gyroscopes and 3D magnetometers to measure rotation, and barometers to detect height, according to Mellinger.
The small, agile aircraft used in the musical performance run for 11 minutes on a charge, while larger drones used in other experiments run for up to 30 minutes, he notes.
Each quadrocopter runs the open source Robot Operating System on an ARM7-based low-power processor. The researchers used these relatively slow processors to keep costs down.
A motion-tracking product from Vicon Motion Systems uses machine vision to estimate the location of each drone on the basis of video from multiple ceiling-mounted cameras in the room where the robots are flying.
The system feeds this positional information to the base-station computer, a standard PC that acts as a centralized control server. It issues commands to each drone using the ZigBee low-power, low-speed wireless communications technology.
The control server calculates a function representing the group's intended collective movement, based on instructions from the researchers. It then feeds the function to each drone's less powerful onboard controller, which directs the robot's path so that it avoids obstacles and is in the right place at the right time for the task it's performing.
The control server continues to receive positional information from the motion-tracking system and, if the drones stray off course, sends them new commands.
A program running on each drone interprets high-level commands from the control server and, up to 600 times a second, translates them into speed changes for the drone's four rotors, which can each turn at a different speed. This enables precise aerobatic movements.
The system's architecture distributes the various tasks to the most appropriate elements, which reduces latency and works better than using a single controller for all functions.
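The sketch below illustrates this division of labor in miniature: a slow ground-station setpoint is tracked by a fast onboard loop running at roughly the 600-updates-per-second rate cited above. The proportional-derivative gains, one-dimensional dynamics, and update rates are invented for illustration; the Grasp Lab's actual controllers are far more sophisticated.

```python
# Miniature version of the control split: a slow ground-station setpoint and
# a fast onboard PD loop at roughly 600 updates per second. Gains, rates, and
# the one-dimensional unit-mass dynamics are invented for illustration.
KP, KD = 4.0, 1.5            # hypothetical proportional and derivative gains
DT = 1.0 / 600.0             # onboard update period (about 600 Hz)

def onboard_step(pos, vel, setpoint):
    """One fast control tick: turn position error into a thrust command."""
    error = setpoint - pos
    thrust = KP * error - KD * vel   # PD law on a single axis
    vel += thrust * DT               # toy dynamics: unit mass, no gravity
    pos += vel * DT
    return pos, vel

pos, vel = 0.0, 0.0
for tick in range(600):              # simulate one second of flight
    # The ground-station command (a fixed 1 m altitude here) changes far
    # less often than the onboard loop runs.
    pos, vel = onboard_step(pos, vel, setpoint=1.0)
print(f"altitude after 1 s: {pos:.3f} m")
```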
Ramifications
The swarm-control project builds on previous Grasp Lab work on developing precise controllers that enable individual robots to perform acrobatic maneuvers.
As part of the project, the Grasp team has developed algorithms for enabling drones to hold objects on a flying platform, which could tilt to the side if the aircraft doesn't fly evenly. In the quadrocopters' musical performance, this capability let some robots work together to carry sticks and use them to hit the cymbals.
A big challenge lies in developing a system that works outside a closed, laboratory-based motion-capture environment, according to Mellinger. This would require a different motion-capture system, he says.
In the long run, he notes, this technology could have many useful applications in areas such as agriculture and infrastructure inspection.
Of the small robots, Kumar adds, "They can operate indoors, in tightly constrained environments. We are interested in using them for search and rescue, disaster recovery, and other missions in which it is too dangerous for human workers to operate."
The smaller robots are not yet available for purchase, but the Grasp team's larger, 3-foot-long drones are being sold by the group's spinoff company, Kmel Robotics, Kumar says.