"AI in the Doctor's Office" describes diagnostic tools based on neural nets and other intelligent techniques, as well as a multiagent system for comprehensive patient management. "They Swarm" covers microrobots that act collectively on the insect swarm model for tasks such as precise drug delivery. "I Eat, Therefore I Am" describes several proposals for autonomous bots that power themselves by "eating" plant or animal matter.
AI in the Doctor's Office
Imagine you're aboard a spacecraft en route to Mars. Each radio transmission to or from Earth can take 20 minutes, making normal conversation with home impossible. So when you and your crewmembers need help working through the stress of your lengthy mission, you turn instead to an onboard computerized personal assistant and therapist. Like Eliza, the famed program that simulated a psychotherapist and helped introduce the world to AI, the NASA-funded Virtual Space Station simulator, being developed by the US National Space Biomedical Research Institute, asks simple questions. Going beyond Eliza, however, this program uses the answers to build an assessment of the astronaut's mental state. Then it helps each crewmember develop healthy behaviors aimed at avoiding depression.
It's easy to imagine how medical diagnostic programs similar to the Virtual Space Station could serve as useful tools in today's overburdened healthcare system. But don't count on getting an appointment with a virtual physician anytime soon. A medically sanctioned, earthbound version of NASA's spacecraft therapist might be as far in the future as an actual mission to Mars.
Indeed, although AI technologies now routinely help at everything from locating oil deposits to choosing what products to buy online, those technologies have yet to achieve a firm foothold in the medical field. As with another IT application, electronic medical records (EMR), the healthcare industry has been slow to embrace AI. One reason is that the field is subject to a bevy of regulations, which vary from country to country, and thorough safety-testing of new treatments can take a decade or longer. Another is that many medical professionals are reluctant to abandon the traditional disease-testing methods on which they have steadfastly relied.
Black Box Advice
Nevertheless, many promising applications under development have the potential to make AI medical diagnosis happen. Numerous studies, for example, reveal that neural nets can successfully diagnose a variety of ailments.
In 2002, a study titled "Classification and Prediction of the Progression of Thyroid-Associated Ophthalmopathy by an Artificial Neural Network" appeared through the National Center for Biotechnology Information. Thyroid-associated ophthalmopathy is a condition associated with Graves' disease, an autoimmune disorder. The study found that a neural net trained using case-history data could predict whether a patient exhibited the condition with nearly 80 percent accuracy. Moreover, the network could predict with close to 70 percent accuracy how far a patient's disease had progressed.
Even more dramatic results were reported two years later in the Journal of Investigative Dermatology, in "Melanoma Diagnosis by Raman Spectroscopy and Neural Networks: Structure Alterations in Proteins and Lipids in Intact Cancer Tissue." The authors reported that "neural network analysis based only on the spectral information allowed us to diagnose MM [melanoma] with a sensitivity of 85%, and specificity was 99%. This is comparable to the diagnostic accuracy for MM achieved by trained specialists in dermatology."
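Sensitivity and specificity are simple ratios over a classifier's confusion counts. A minimal sketch, using illustrative counts chosen here to reproduce the reported rates (they are not the study's actual data):

```python
def sensitivity(tp, fn):
    """Fraction of actual melanoma cases the classifier flags (true-positive rate)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of benign cases the classifier correctly clears (true-negative rate)."""
    return tn / (tn + fp)

# Hypothetical counts: 17 of 20 melanomas detected,
# 99 of 100 benign lesions correctly cleared.
print(sensitivity(tp=17, fn=3))   # 0.85
print(specificity(tn=99, fp=1))   # 0.99
```

The trade-off between the two rates is what makes the 85/99 pairing notable: a classifier can trivially reach 100 percent sensitivity by flagging everything, at the cost of near-zero specificity.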
More recently, research results involving neural nets announced in September 2009 by the Mayo Clinic reveal how AI could offer a safer alternative to traditional physical testing procedures. The study, performed on 189 patients over a 12-year period, looked at endocarditis, an infection of the heart's valves and chambers that especially afflicts patients with implanted pacemakers and other medical devices. The physical test for endocarditis involves inserting a probe down the esophagus, a procedure so risky that some hospitals reportedly refuse to perform it. Even so, detecting the disease is vital: left untreated, endocarditis commonly proves deadly, with mortality rates that can top 60 percent.
In their search for a safer way to detect the disease, Mayo Clinic researchers used known case histories to train several neural nets to recognize symptoms of endocarditis. At the conclusion of training, the best neural net could correctly identify "72 of 73 implant-related infections and 12 of 13 endocarditis cases…with a confidence level greater than 99 percent," according to a report on the research.
Neural nets are by no means the only AI tool that has shown potential in medical diagnosis. Intelligent algorithms similar to those that enhance photographs for forensic purposes can be used to help physicians better interpret medical scans.
In "An Efficient Algorithm and Architecture for Medical Image Segmentation and Tumour Detection," researchers from Brunel University in the UK report success using a technique known as Haar Wavelet Transform Factorization (HWTF) to speed up and more accurately render scans using an image-analysis method known as multiresolution analysis (MRA). This involves analyzing the scan at multiple frequency resolutions to overcome image-processing shortcomings inherent within any individual frequency range. As the authors describe it, "MRA is designed to give good time resolution and poor frequency resolution at high frequencies, and poor time resolution and good frequency resolution at low frequencies. It enables the exploitation of image characteristics associated with a particular resolution level, which may not be detected using other analysis techniques."
The UK researchers combined MRA with other common medical image-rendering techniques to create a composite image that takes advantage of each. The goal: a highly accurate rendering that would give practitioners a definitive view of the possible tumor. Normally creating such a composite would be a computationally intensive operation. However, the HWTF technique speeds up the process, creating an image that can be transmitted and displayed using common medical image codecs.
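As a sketch of the underlying idea (not the Brunel team's HWTF implementation), one level of the 2D Haar wavelet transform splits an image into a coarse approximation plus three detail sub-bands; repeating the split on the approximation yields the multiresolution pyramid MRA works on:

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2D Haar wavelet transform.

    Splits an image (with even dimensions) into a coarse approximation
    plus horizontal, vertical, and diagonal detail sub-bands -- the
    building block of multiresolution analysis.
    """
    a = img[0::2, 0::2].astype(float)   # top-left pixel of each 2x2 block
    b = img[0::2, 1::2].astype(float)   # top-right
    c = img[1::2, 0::2].astype(float)   # bottom-left
    d = img[1::2, 1::2].astype(float)   # bottom-right
    approx = (a + b + c + d) / 4        # low/low: half-resolution image
    horiz  = (a + b - c - d) / 4        # low/high: horizontal edges
    vert   = (a - b + c - d) / 4        # high/low: vertical edges
    diag   = (a - b - c + d) / 4        # high/high: fine texture
    return approx, horiz, vert, diag

# Re-applying haar2d_level to `approx` gives successively coarser
# levels, letting an algorithm examine a scan at several resolutions.
```

Smooth regions produce near-zero detail coefficients, which is why the decomposition both compresses well and highlights edges such as tumor boundaries at each scale.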
The Big Picture
Despite their usefulness, these AI techniques each focus only on a particular application or on diagnosing a particular condition. This silo approach is common and likely inevitable in medicine, where specialized researchers and practitioners largely confine their activities to the area of their expertise. But it makes it challenging for practitioners to assimilate and utilize the vast array of information on new discoveries, techniques, and best practices.
Consider clinical guidelines, for example. These documents detail the best practices or latest information concerning a medical condition. Despite their benefits, note scientists working in the Research Group on Artificial Intelligence at Spain's University Rovira i Virgili, "clinical guidelines aren't widely used in clinical practice, mainly because doctors don't have the tools to easily integrate them into the daily workflow."
The Spanish researchers' solution is a multiagent system, designed to form the basis of a comprehensive patient-management system. The platform emerged from several years of research, including "academic, proof-of-concept exercises," the group notes. "But they point to the kind of problems in medicine that intelligent agents can help solve in the near term." (See "Applying Agent Technology to Healthcare: The GruSMA Experience," http://doi.ieeecomputersociety.org/10.1109/MIS.2006.108.)
The resulting platform consists of eight types of agents designed to collaborate across the entire healthcare industry supply chain, from patients to practitioners, care-providing institutions, service firms such as testing labs, research organizations, and medical databases. The model is analogous to the way the US Defense Department and large manufacturers bring suppliers, distributors, and users under the umbrella of a single collaborative system.
Within the platform's framework, individuals, medical centers, doctors, and service firms were assigned agents, each of which could access information within its particular domain. Every doctor within the system, for example, had an agent that kept his or her schedule. A specialized medical records agent was endowed with enhanced security features to ensure the privacy of the patient data it stored and retrieved. The system also included an agent charged with accessing clinical guidelines and a broker agent that received and transmitted requests from other agents in the platform.
Working in concert, perhaps in a cloud environment, these agents could greatly streamline the patchwork approach to medical care that critics view as costly and inefficient. As the Spanish researchers describe the resulting workflow, when doctors suspect their patients may be suffering from a particular disease, they could request their personal agents to locate the clinical guidelines for best-practice information.
The guidelines might suggest that the doctor obtain more information from the patient's history. So the doctor's agent would again be tapped, this time to request that information from the medical records agent. If the guidelines recommended tests, the doctor's agent could schedule an appointment with a test service provider's agent. Test results would be automatically sent to the medical records agent and to the examining physician.
Scheduling hospital stays and follow-up treatments such as in-home nurse visits could be handled similarly by the physician. And presumably insurance companies and government payers could be given agents of their own to streamline billing. To keep the system up to date with the latest practice and research information, the Spanish group developed a specialized Web crawler tasked with creating a large-scale medical ontology.
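The broker-mediated request flow described above can be sketched in a few lines. This is an illustrative toy, not the GruSMA platform's actual API; the agent names, message fields, and canned responses are all invented here:

```python
class Broker:
    """Routes requests between registered agents by name."""
    def __init__(self):
        self.agents = {}

    def register(self, name, agent):
        self.agents[name] = agent

    def request(self, target, message):
        return self.agents[target].handle(message)

class GuidelineAgent:
    def handle(self, message):
        # Look up best-practice steps for the suspected disease
        # (hard-coded here for illustration).
        return ["review patient history", "order blood test"]

class RecordsAgent:
    def handle(self, message):
        # Retrieve the (access-controlled) history for the named patient.
        return {"patient": message["patient"],
                "history": ["2008: murmur noted"]}

class DoctorAgent:
    def __init__(self, broker):
        self.broker = broker

    def investigate(self, patient, suspected_disease):
        # 1. Fetch the clinical guidelines for the suspected disease.
        steps = self.broker.request("guidelines",
                                    {"disease": suspected_disease})
        # 2. If the guidelines call for it, pull the patient's record.
        record = None
        if "review patient history" in steps:
            record = self.broker.request("records", {"patient": patient})
        return steps, record

broker = Broker()
broker.register("guidelines", GuidelineAgent())
broker.register("records", RecordsAgent())
doctor = DoctorAgent(broker)
steps, record = doctor.investigate("Jane Doe", "endocarditis")
print(steps)
```

Because every exchange passes through the broker, new participants such as test labs or insurers can join the workflow simply by registering an agent, without the doctor's agent needing to know about them in advance.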
About the only thing lacking in the system is an animated interface along the lines of the Mars-mission spacecraft psychologist that would allow system users to vocally address their agents and receive a reply in kind. But that is likely a minor problem when compared to the intricacies of linking the entire healthcare supply chain, and we may see just such an interface before astronauts head off to Mars.
They Swarm

Robotic designs continue to mimic natural evolution. As the technology matures, designers are presenting increasing variations in size, shape, and function. Whereas robots were once envisioned as monolithic creatures with superhuman intelligence and strength, newer concepts include swarms of insect-inspired, cheap-to-produce devices that operate via a collective intelligence.
Developing a proof of concept in this area is a group of European researchers working under the project name I-Swarm, short for Intelligent Small-World Autonomous Robots for Micromanipulation (www.i-swarm.org). Measuring just four millimeters square and equipped with flea-sized motorized legs along with a solar cell for power, the microrobots might one day prove ideal for surveying disaster areas or performing tasks as mundane as housecleaning.
Whatever their function, the I-Swarm robots aim to accomplish it with as little human intervention as possible, according to the project Web site. Through constant communication among themselves, the individual bots will develop emergent behaviors that allow them to complete their assigned objectives in the most efficient manner.
Using the model of an ant colony, the research group assigned the roles of scout and worker to groups of microbots. The robots in each category then work together toward a particular goal. For example, scouts might be directed to disperse evenly across a given area to collect information. Worker microrobots might likewise occupy the area, acting on information from the scouts to perform a specific task. For now, the worker robot tasks are limited to things such as maneuvering in an orderly line through a series of obstacles that the scouts have located for them.
At École Polytechnique Fédérale de Lausanne in Switzerland, researchers on a similar project called MiCRoN are working on wirelessly powered robots to tackle more complex tasks (http://lsro.epfl.ch/page66048-en.html). According to the project's Web site, the robots will operate autonomously, taking on tasks such as moving living cells or assembling microdevices. The robots will orient themselves using an image-based positioning system, onboard cameras, and communication capabilities. Plus, they'll be equipped with millimeter-sized claws and a tiny syringe for injecting substances into biological cells.
Meanwhile, research underway at the NanoRobotics Laboratory at the École Polytechnique de Montréal (http://wiki.polymtl.ca/nano/index.php/NanoRobotics_Laboratory) is taking robot swarms to an even smaller level. Measuring just 300 square microns, a solar-powered bot in this project acts as a kind of sensor, transmitting its findings to a nearby normal-sized computer. The computer then directs an electromagnetic signal toward magnetotactic bacteria, which naturally create magnets through biochemical means and maintain the finished product within their cell walls, perhaps as a navigational aid. The electromagnetic signal sent via the computer causes the bacteria to herd the robot in a desired direction. So far, project director Sylvain Martel has managed to enlist 3,000 bacteria to move the microscopic robot (see a video of this at www.technologyreview.com/blog/editors/23533/?a=f), demonstrating how the device might one day serve as an ultraprecise drug delivery mechanism. Maneuvered by the bacteria, the robot could dispense medicine into targeted cells or perhaps use microscale claws to devour a tumor.
I Eat, Therefore I Am

DARPA projects routinely outstrip science fiction. Here's a recent example: autonomous, mobile robots designed to power themselves by eating live plant matter while patrolling the battlefields of the near future. A robotic arm aboard the box-shaped wheeled device snatches vegetation. The power plant that enables the bots to keep going longer than the Energizer Bunny is a boiler-like device that converts the ingested raw cellulose into heat energy.
Converting biomass to energy is nothing new, of course. The more exciting technology in these combat bots is their AI software, which enables them to distinguish machine-digestible flora from other objects likely to be found on a battlefield. The robots' AI is based on 4D/RCS, a three-decade-old, $125 million project within the US National Institute of Standards and Technology aimed at creating a reliable autonomous control system.
When word of the project reached the blogosphere last summer, there was rampant speculation that the robots were actually designed to subsist on animal flesh. This led some to envision hordes of metallic, insectoid creatures hunting down human prey. One blogger noted that the confusion may have resulted from the project's press release, which said simply that the robots would procure and digest "biomass," a term that might conceivably include animal flesh.
It took an article in the July 2009 issue of Scientific American to set the record straight. The robots were part of a DARPA project called EATR, short for Energetically Autonomous Tactical Robot. Two firms—Robotic Technology, Inc. and Cyclone Power Technologies—were designing the prototype. Neither company had any idea "that their proposed machine would set off one of humanity's worst fears: the dawn of an artificially intelligent race of self-sufficient mechanical devices with a hunger for organic meals (including people)," the magazine reported. The robots, the article hastened to add, were strictly vegetarian.
Notwithstanding, others have proposed autonomous bots that subsist on animal flesh, albeit from the lower end of the food chain. A decade ago, a group of UK researchers published an article titled "Artificial Autonomy in the Natural World: Building a Robot Predator." Noting that all living things subsist by extracting energy from their environment, the authors proposed creating that same form of autonomy in the robot domain: "autonomous robots with animal-like self-sufficiency both in terms of energy and information. The robots will live free on agricultural land, hunting and catching slugs, and fermenting the corpses to produce biogas, which will fuel the generator providing the robots with power."
As so often happens with military technology, civilian spin-offs eventually emerge. UK designers James Auger and Jimmy Loizeau devised a fly-catching home robot that subsists on the insects it snares (www.auger-loizeau.com/index.php?id=13). While admitting that their concept stretches the definition of robotics, they write that their device is intelligent to the degree that it can sense the environment and possesses the motivation—thanks to its programming—to capture biomass for sustenance. Let's hope it doesn't evolve.