In the News
May/June 2011 (Vol. 26, No. 3) pp. 5-9

AI use is increasing in many areas of commercial technology, including business intelligence, speech and handwriting recognition, and security monitoring. At the same time, AI use is also growing in other areas, including technologies for assisting the disabled. This article explores ways researchers are using AI and computer-vision techniques to read sign language and improve mobility aids such as wheelchairs and walkers. This issue also includes news brief articles on shape-changing robots and using AI to simplify mobile recommendation technology.

Researchers, Vendors Use AI to Enable the Disabled
George Lawton

AI use is increasing in many areas of commercial technology, including business intelligence, speech and handwriting recognition, and security monitoring. At the same time, AI use is also growing in other areas, including technologies for assisting the disabled. For example, researchers are using AI and computer-vision techniques to read sign language and improve mobility aids such as wheelchairs and walkers.
AI is thus being implemented in a growing variety of tools designed to help the disabled. These tools include screen readers and computer-navigation aids for the visually impaired, voice-to-text converters for the deaf, neural interfaces that interpret brainwaves to help users operate wheelchairs and walkers, and assistive gaming technology. Researchers are also dramatically improving user interfaces for wheelchairs by adding environmental sensors, sign-language training software, and other applications.
The market for devices to help the disabled appears to have a bright future, which should drive further development. BCC Research predicts that the market for assistive technologies in the US alone will grow from $38.2 billion in 2008 to $49.3 billion in 2013.
Early Potential
AI was first used in assistive devices in 1970 at Haskins Laboratories, a US facility that still conducts basic research on spoken and written language. Researchers there developed an experimental prototype that used optical character recognition to interpret the text in a book and then sent the output to a rules-based speech synthesizer.
In 1976, author and inventor Ray Kurzweil developed the first commercial book reader for the blind. The device used AI to discern characters from a variety of fonts. The early models cost $50,000 and weighed 300 pounds. The latest assistive reading devices, on the other hand, can be embedded inexpensively in a smartphone.
In the future, AI will play an important role in making the world more accessible for the disabled by simplifying interfaces and device controllers, predicted Jeff Bier, president of Berkeley Design Technology Inc. (BDTI), a consultancy for embedded-processor technology and applications. A key aspect will be improved vision systems that use AI to transform raw visual data into real-world models, he explained.
Today's Approaches
Researchers have developed several improved AI-based approaches for assisting the disabled. Multiple tools, such as PPR's iCommunicator, use AI for speech interpretation and natural language processing. These tools let blind students talk to their computers and translate spoken words into text for email and chat applications.
Helping the Blind Use Computers
Modern computer interfaces create two problems for the visually impaired: reading text and navigating the screen. Text-to-speech readers transform written words into spoken output using, in part, AI-based speech-synthesis-by-rule techniques. Screen readers use AI techniques to discern screen-layout and menu-navigation patterns and render them through an audio interface.
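As a rough illustration of the speech-synthesis half of that pipeline, the following minimal Python sketch uses the open source pyttsx3 library (an assumed stand-in; free and commercial screen readers ship their own synthesizer drivers) to read a snippet of interface text aloud:

```python
# Minimal text-to-speech sketch; pyttsx3 is a stand-in library choice,
# not what production screen readers actually use.
import pyttsx3

engine = pyttsx3.init()                  # selects a platform speech driver
engine.setProperty("rate", 150)          # speaking rate in words per minute
engine.say("File menu. New, Open, Save, Exit.")  # e.g., a menu read aloud
engine.runAndWait()                      # block until speech finishes
```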
Among the first free navigation tools for Windows systems is NonVisual Desktop Access (NVDA), developed by the Australia-based nonprofit NV Access.
"The only way I could previously get access to Windows was to buy screen-reading products for $1,000 to $2,000," said NV Access president Michael Curran, a computational linguist and University of Sydney research fellow who is blind.
Because NVDA is open source, developers can easily create new features and add support for languages that all users can then share. It now supports 27 languages.
AI and Sign Language
Researchers at the Netherlands' Delft University of Technology are working on AI-based techniques for translating sign language into text or speech. The researchers have applied this technology to their Electronic Learning Environment to help children who are deaf or hearing-impaired learn to sign more quickly. They use AI to help decipher students' hand signals and provide feedback when a sign is made incorrectly.
This and similar tools work with video analytics and other machine-vision techniques, which use AI to discern patterns such as sign-language gestures.
Improved AI techniques will be required to interpret gestures in changing lighting conditions.
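The Delft group's algorithms are not detailed here, but a minimal sketch can show the kind of machine-vision preprocessing such systems build on. The code below, assuming OpenCV 4 and a webcam, isolates a skin-colored region and computes a crude hand-shape feature (how much the contour fills its convex hull) that a downstream classifier could use; the color thresholds are illustrative guesses and, as noted above, would need adaptation to changing lighting:

```python
# Crude hand-shape feature extraction; thresholds and camera index are
# illustrative assumptions, not values from the Delft system.
import cv2
import numpy as np

def hand_solidity(frame_bgr):
    """Return contour-area / hull-area for the largest skin-colored blob,
    a rough proxy for an open hand (low) versus a closed fist (high)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 255, 255]))
    mask = cv2.medianBlur(mask, 7)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    hull_area = cv2.contourArea(cv2.convexHull(hand))
    return cv2.contourArea(hand) / hull_area if hull_area else None

cap = cv2.VideoCapture(0)                # default webcam
ok, frame = cap.read()
if ok:
    print("hand solidity:", hand_solidity(frame))
cap.release()
```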
AI and Neural-Interface Wheelchairs
Researchers have looked at using AI to improve user interfaces for devices such as wheelchairs since the early 1980s, said José Millán, associate professor at Switzerland's École Polytechnique Fédérale de Lausanne (EPFL).
One component of this work is the development of brain-machine neural interfaces capable of recognizing the commands that a user's brain activity represents and moving a wheelchair in response.
In these experimental systems, users wear scalp electrodes that take electroencephalograph (EEG) readings of the brain's electrical activity. The system processes the electrode signals to extract patterns associated with the user's intent, applying machine learning and neural networks, for example, to recognize EEG patterns and the commands they represent.
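As a minimal sketch of that recognition step, assume each EEG trial has already been reduced to a fixed-length feature vector (for example, band-power values per channel); a standard classifier such as linear discriminant analysis can then map trials to wheelchair commands. The data, labels, and command names below are stand-ins, not output from any of the systems described:

```python
# Toy EEG-command classifier; the data are random stand-ins for real
# feature vectors extracted from EEG trials.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))            # 200 trials x 32 features
y = rng.integers(0, 3, size=200)          # hypothetical command labels

clf = LinearDiscriminantAnalysis()
clf.fit(X[:150], y[:150])                 # train on the first 150 trials

command = clf.predict(X[150:151])[0]      # classify a new trial
print({0: "turn left", 1: "turn right", 2: "go forward"}[command])
```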
Honda and Toyota are each working on prototype wheelchairs using this approach, as are researchers at Spain's University of Zaragoza.
A key challenge is reducing the latency between a user's thought and a wheelchair's movement. One line of research focuses on improvements in AI techniques for interpreting gestures, EEG signals, and environmental feedback. Another looks at designing faster and more precise algorithms to reduce processing time.
Future research will evaluate the technology to see how it might apply to a wider audience, noted EPFL's Millán. For example, automakers are already using some of these principles in their cars, such as environmental feedback for obstacle avoidance, said Stanford University consulting professor of computer science Gary Bradski. He is also a senior scientist at Willow Garage, which develops hardware and open source software for personal robotics applications.
Intelligent Walkers
Researchers are incorporating environmental-feedback techniques into AI-based interfaces that control the wheels on walkers. This technology appears in the US Department of Veterans Affairs' Personal Adaptive Mobility Aid (VA-PAMAID) robotic walker and the experimental Intelligent Walker from Spain's Technical University of Catalonia. These walkers can detect handlebar pressure and obstacles, for example, to help users navigate and to assist with tasks such as braking while going downhill.
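The control logic in these walkers is not public; the sketch below is a hypothetical rule of the kind described, with made-up sensor values and thresholds, that combines handlebar pressure, tilt, and obstacle distance into a single braking level:

```python
# Hypothetical walker braking rule; sensor names, units, and thresholds
# are illustrative assumptions, not taken from VA-PAMAID or the
# Intelligent Walker.
def brake_command(handle_pressure_n, tilt_deg, obstacle_cm):
    """Return a braking level from 0.0 (rolling freely) to 1.0 (full stop)."""
    if obstacle_cm < 30:                  # obstacle closer than 30 cm: stop
        return 1.0
    brake = 0.0
    if tilt_deg < -3:                     # heading downhill
        brake += min(0.6, -tilt_deg * 0.05)
    if handle_pressure_n < 5:             # user barely gripping the handles
        brake += 0.4
    return min(brake, 1.0)

# Going downhill with a light grip and no nearby obstacle:
print(round(brake_command(handle_pressure_n=2.0, tilt_deg=-6.0,
                          obstacle_cm=120), 2))   # 0.7
```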
Technical University of Catalonia researchers are also incorporating environmental feedback into intelligent agents used in the European Union's SHARE-it (supported human autonomy for recovery and enhancement of cognitive and motor abilities using information technologies) mobility platform. The goal is to develop a system of sensor and assistive technologies that can be integrated as modules into an intelligent home environment to let elderly or handicapped people live more autonomously.
Gaming for the Disabled
Video games are also incorporating AI technology to help disabled people play. For example, Sony added an element to its MLB The Show game that can automate tasks, such as base running, that players with motor disabilities such as cerebral palsy might have trouble performing.
The approach uses AI algorithms to assist with gameplay, for example by computing the shortest path between two points or by calculating the optimal position and movement of characters as the game state changes.
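As an illustration of the first of those tasks, the minimal sketch below (not Sony's code) uses breadth-first search to find the shortest route between two cells on a small grid with blocked squares:

```python
# Breadth-first shortest path on a toy grid; the grid and coordinates
# are made up for illustration.
from collections import deque

def shortest_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            break
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    if goal not in came_from:
        return None                      # goal unreachable
    path, cell = [], goal
    while cell is not None:              # walk back to the start
        path.append(cell)
        cell = came_from[cell]
    return path[::-1]

field = [[0, 0, 0, 1],                   # 0 = open, 1 = blocked
         [1, 1, 0, 1],
         [0, 0, 0, 0]]
print(shortest_path(field, (0, 0), (2, 3)))
```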
Looking Ahead
"One of the challenges of helping the disabled with new technology is that they are disabled in many different ways," said Willow Garage's Bradski.
In many cases, motor impairments make speech difficult to interpret with traditional approaches because of the great variation in speaking patterns, explained Timothy McCune, president of Integrated Wave Technologies, a speech-interface vendor working to improve audio input for the disabled. Further AI-based research is needed to improve the performance of systems that interpret what people with speech disabilities say, McCune said.
Another challenge is using AI to make spatial-information displays, such as maps and diagrams, on computers and other devices easier for the blind to navigate, noted Curran.
Microsoft's Kinect technology, which uses AI techniques such as pattern recognition to help users operate computer games without an external controller, could help enable the development of niche applications to help the disabled, according to BDTI's Bier.
However, explained Bradski, AI still is not able to function as well as even a young child for many purposes, such as interpreting speech and images. Thus, he said, making AI systems practical for a variety of applications to help the disabled will require much more work, particularly in areas such as interpreting changing textures, shadows, and the hidden sides of objects.
This is a particular issue in using AI to interpret visual data from real-world settings, in which objects are not as easy to detect as they are in a lab, explained Bier. Advanced tools that make sense of the rapidly changing colors, shades, and shadows of real objects still require significant work, he noted.
Nonetheless, said Bradski, "Some of the AI technology we are working on now might get so advanced that even the nondisabled will want them."
Shape-Changing Robots Learn Skills Faster
George Lawton

A University of Vermont researcher has found that robots whose bodies change over time, as animals' bodies do, learn new skills and behaviors more quickly than robots whose bodies remain the same.
Associate Professor Josh Bongard said his work has introduced another fundamental variable into AI research that could benefit learning approaches such as genetic algorithms and neural networks.
"We changed [our robot's] morphology to prove that the body plays a role in intelligent behavior," Bongard explained. "We showed that if the body changes, it becomes easier to evolve the controller" to acquire new skills and behaviors.
Although there has been considerable research on increasing the speed at which a robot learns new tasks, little of it has looked at how changing a robot's shape over time might affect how quickly it acquires skills such as walking. Bongard said he found that changing robots' shapes as their control algorithms evolve can help them learn to walk twice as fast and leaves them more robust and better able to respond to unforeseen events, such as attempts to tip them over.
Cornell University Associate Professor Hod Lipson, an expert in biologically inspired robots, said Bongard's work is very important. "There is a tremendous amount of work on adaptation of the behavior side of robotics," he noted, "but little work on adaptation of the body, even though in nature, both brain and body coadapt."
The Experiment
Bongard and his team first created a computer simulation of robots and their environment. They then gave the simulated robots the goal of learning to reach a box within five seconds. Each simulated robot included 12 moving parts and looked like a highly simplified mammal skeleton. One set of robots started with an upright four-legged stance. Another began with the shape of a legless tadpole and was subsequently given splayed legs and then an upright four-legged stance.
The research team focused on two metrics: each robot's performance on the task and, ultimately, the time it took the robots to learn to reach the target within five seconds.
Evolutionary Robotics
Bongard's research is in the field of evolutionary robotics, which uses computer simulation to help design better robot controllers. Rather than try to design an effective controller from the start, researchers in this field look for ways to enable a better controller to evolve over time.
In their simulations, learning algorithms adjust processing techniques and other variables in robots' controllers to find the optimal solution to research problems. Bongard said that changing the robots' shape over time helps accelerate learning by forcing the robots to experiment with a wider variety of behavior patterns throughout the process.
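A minimal sketch of such an evolutionary loop appears below. The fitness function is a stand-in for the physics simulation, and the 12-parameter controller is an arbitrary assumption; each generation keeps the best controllers and fills the population with mutated copies:

```python
# Toy evolutionary-robotics loop; fitness() is a placeholder for running
# a physics simulation and measuring progress toward the target.
import random

GENES = 12                               # one parameter per joint (assumed)

def fitness(controller):
    target = [0.5] * GENES               # arbitrary stand-in optimum
    return -sum((g - t) ** 2 for g, t in zip(controller, target))

def mutate(controller, sigma=0.1):
    return [g + random.gauss(0, sigma) for g in controller]

population = [[random.uniform(-1, 1) for _ in range(GENES)] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)   # rank controllers
    parents = population[:5]                     # keep the five best
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fitness)
print("best fitness:", round(fitness(best), 4))
```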
"In many ways, this is a call for researchers to think carefully about how they are pursuing AI," noted Bongard.
Generally, he said, AI has tended to focus on the type of learning gleaned via books or machine vision but not how individuals use their developing bodies to learn.
Implications
Bongard said his work might prove useful to developing robots that can sense the impact of their movements on the environment and their internal state. He noted that his implementation is a basic example involving just one set of behaviors and a robot with a simply evolving shape. High-level animals are much more complex, he added, so the University of Vermont researchers still have many dimensions to explore.
The scientists plan to continue working with simulations to better understand the principles behind their research. They are currently working on a project to make their simulation software open source by the end of this year so that more people can participate in the research. The software will let users design new robots, set different goals, and share the results.
In the long run, Lipson said, this type of research could influence some of the fundamental paradigms underlying AI. He noted, "This is an important step in the evolution of intelligent machines."
Simplifying Mobile Recommendation Technology with AI
George Lawton

Perhaps the ideal for many mobile-device users is an application that anticipates what they want and finds it for them even before they have a chance to ask. That is the idea behind a new AI-based mobile recommendation engine called Seymour that Clever Sense will soon roll out (www.thecleversense.com/seymour.html). Clever Sense leverages data mining, context awareness, location technology, and natural language processing (NLP) to recommend restaurants, stores, and other places users could visit without requiring them to enter queries.
Seymour guesses what users are looking for based on their past interactions with the application, the time of day, and who is with them at the moment. It harvests and sifts through information on the Web to make recommendations based on user context, explained Clever Sense CEO Babak Pahlavan.
Eliminating the Query
Siri, which Apple acquired last year, was one of the first AI-based recommendation services for mobile-device users. However, Siri requires users to enter a search term for every query. Seymour, on the other hand, automatically recommends restaurants, stores, and other locations based on variables such as the time of day and the day of the week. Users also create a profile with information about eating, shopping, and other preferences, and the application considers the history of recommendations the user did or didn't accept.
Seymour, which requires users to run a special client on their mobile device, automatically presents recommendations when the application is opened, without the need to start a search. Users can enter additional terms into a search bar to refine a search if desired; the application uses NLP to parse these queries. It also utilizes passively gathered location data to provide more helpful suggestions without requiring user input, Pahlavan noted.
"We also found that the people you are with changes things drastically. If you are with your children, the things you are looking for are different than if you are with your wife or buddies," he added.
Under the Hood
The basic Seymour architecture consists of three components:

    • A Web mining element gathers data on various locations from sites such as Yelp and uses AI techniques like NLP to build an interest graph—the network of people a user shares interests with but doesn't necessarily know.

    • A service component searches the interest graph for relevant results.

    • Application software on the mobile device manages interactions with the service component.

Seymour does a good job of recommending activities to users because it lets them enter a lot of information about areas of interest, explained Clever Sense adviser and Stanford Professor Emeritus of computer science Jeff Ullman.
For example, he noted, Clever Sense lets users enter detailed information about the restaurants they are looking for, including favorite dishes, price levels, ambience, and décor. This not only lets users be more precise in their requests but also provides more helpful information about places, drawn from previous Seymour participants, he said.
A key challenge was optimizing the algorithms so that they would generate results more quickly. This work let Clever Sense reduce the time its system takes to process a query on a given device from 9.3 seconds to 200 milliseconds.
Interest Graphs
Seymour uses AI techniques such as machine learning to process data gleaned from multiple websites to automatically generate a list of 200 to 400 relevant characteristics for each location. Traditional recommendation engines support only 10 to 20 features, said Pahlavan.
Seymour uses data mining to analyze website information and help create interest graphs for people and various places. The application uses the graphs to predict which specific places users might want to visit based on profiles for similar users.
Seymour builds the interest graph using a starting vocabulary of about 200,000 words. The NLP component uses this vocabulary to interpret text on the Web and normalize the many ways people might describe the same characteristic. The application automatically adds relevant new words it finds on the Web.
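As a toy illustration of that normalization step, the sketch below uses a handful of made-up synonyms standing in for the 200,000-word vocabulary; different wordings of the same characteristic collapse to one canonical term before user and place profiles are compared:

```python
# Toy vocabulary normalization; the synonym map and place descriptions
# are invented for illustration.
SYNONYMS = {
    "inexpensive": "cheap", "affordable": "cheap", "budget": "cheap",
    "cozy": "intimate", "romantic": "intimate",
    "noisy": "loud", "rowdy": "loud",
}

def normalize(terms):
    """Map free-text descriptors onto canonical characteristic terms."""
    return {SYNONYMS.get(t.lower(), t.lower()) for t in terms}

place_profiles = {
    "cafe_luna": normalize(["Affordable", "cozy", "quiet"]),
    "rock_bar": normalize(["noisy", "budget", "live music"]),
}
user_interests = normalize(["inexpensive", "romantic"])

for place, traits in place_profiles.items():
    print(place, "shares", len(user_interests & traits), "characteristics")
```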
Seymour is now in private beta and is slated to go live on Android, iPhone, and Windows Mobile devices this summer. Clever Sense also plans to port the technology to other platforms such as TVs and cars. Pahlavan declined to comment, however, on the business model the company will use to make money from its application.
Selected CS articles and columns are also available for free at http://ComputingNow.computer.org.