In the News
January/February 2011 (Vol. 26, No. 1) pp. 18-21
1541-1672/11/$31.00 © 2011 IEEE

Published by the IEEE Computer Society

Scientists and businesses are increasingly using computer vision systems for tasks such as identifying important research results, developing highly functional robots, making videogames more realistic, and enabling surveillance cameras to recognize potential security problems. Considerable effort is now under way to add more intelligence to vision systems to make them even more effective. Other applications in the works include AI systems that unravel market complexity and intelligent exoskeletons that help paraplegics walk.

Researchers Add Intelligence to Vision Systems
Mark Ingebretsen

Scientists and businesses are increasingly using computer vision systems for tasks such as identifying important research results, developing highly functional robots, making videogames more realistic, and enabling surveillance cameras to recognize potential security problems. Considerable effort is now under way to add more intelligence to vision systems to make them even more effective. Companies such as IBM, Intel, Microsoft, and Sony are pouring millions of dollars into intelligent vision systems, according to Ian Weightman, president of IMS Research, a market analysis firm.
The RoboCup soccer tournament, founded in 1997; Honda's Asimo robot, unveiled in 2000; and the Graduate Robot Attending a Conference (Grace), developed for the AAAI's 2002 national meeting by Carnegie Mellon, the Naval Research Laboratory, Metrica, Northwestern University, and Swarthmore College, were early examples of intelligent vision systems using basic object recognition.
More recent examples include Microsoft's Kinect system for the Xbox, which uses computer vision to recognize movements, creating a controller-free interface that lets users direct game activities via body motions. Intel is devising ways to let small devices such as augmented-reality glasses process complex video information. The University of Wales' Cognitive Robotics Research Center and its Global Academy have also partnered with Sony UK to develop vision-recognition systems for uses such as product inspection. IBM is creating visual indexes of large amounts of broadcast news footage for intelligent content retrieval. And the automotive industry has already adopted vision-system advances that enable cars to recognize road hazards and dangerous conditions and warn drivers about them, according to Weightman.
Weightman believes intelligent vision systems will be a major field for AI research. "Key elements of an intelligent video solution, including compute engines that can process high-definition digital-video streams in real time, high-capacity solid-state storage, and advanced video analytic algorithms, have finally evolved to the point at which performance has increased and costs have fallen sufficiently," he explained.
IMS predicts these advances will routinely enable robots and other devices to intelligently see and interact with their environment, perhaps before 2020.
Of course, some observers are cautious about such projections, as numerous past AI predictions have failed to pan out.
Spotting Threats in a Crowd
Researchers at Finland's VTT Technical Research Center have discovered a way to process the massive amounts of visual data captured by surveillance video systems in airports and other high-traffic public locations that are potential terrorist targets.
The EU-funded, machine-learning-based project is called Subito (www.subito-project.eu); the name reflects its goal: surveillance of unattended baggage and the identification and tracking of the owner. Currently, the researchers are creating an ontology of relationships between pieces of baggage and their relative locations within a facility to develop algorithms able to spot dangerous situations.
According to a project report, the algorithms will try to identify the intentions of people captured by cameras, whether alone or in a crowd. For example, the system might be programmed to recognize a person hiding a bag under a chair and leaving the scene.
The system will draw on a large data set of luggage types so that it can distinguish baggage from other items, and it will also be able to differentiate people from other objects.
Subito would create a spatial association between a bag and its owner. When the distance between them exceeds preset limits, the system begins tracking the person's movements. It could also review video taken earlier to determine the subject's previous path, which could help identify and apprehend accomplices.
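As a rough illustration of such a proximity rule, the following sketch flags an owner who strays beyond a preset distance from an associated bag. The coordinates, threshold, and function names are illustrative assumptions, not details of the Subito implementation:

```python
import math

# Illustrative threshold: alert when the owner strays beyond this
# distance (in meters) from an associated bag. The value and the
# position format are assumptions, not Subito project parameters.
DISTANCE_LIMIT_M = 5.0

def distance(a, b):
    """Euclidean distance between two (x, y) ground-plane points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def check_bag_owner(bag_pos, owner_pos, tracking):
    """Start tracking an owner once they exceed the preset limit."""
    if distance(bag_pos, owner_pos) > DISTANCE_LIMIT_M:
        tracking = True  # begin following the owner across cameras
    return tracking

# Example: the owner drifts away from the bag over successive frames.
frames = [((0.0, 0.0), (1.0, 0.5)),   # owner near bag
          ((0.0, 0.0), (4.0, 2.0)),   # still within the limit
          ((0.0, 0.0), (8.0, 3.0))]   # limit exceeded: track owner
tracking = False
for bag, owner in frames:
    tracking = check_bag_owner(bag, owner, tracking)
print("tracking owner:", tracking)  # -> tracking owner: True
```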
For complex situations in which individuals or crowds create dynamic visual obstacles and prevent computation of tracked subjects' likely paths, Subito researchers are developing intelligent-agent-based pedestrian simulations that would generate real-time predictions of those paths.
3D Motion Capture
Traditional intelligent vision systems work with 2D shapes, colors, and movements. Some researchers are adding depth information, letting them function in three dimensions.
Iowa State University Assistant Professor Song Zhang has developed a 3D digital motion-capture system able to render a moving object at 180 frames per second. It offers more than 300,000 points of reference per frame—a level of detail far exceeding that of current gaming consoles—and can render speech-associated mouth movements and photorealistic facial expressions.
AI algorithms added to Zhang's system could, for example, read lips or, in medical applications, identify unhealthy heart activity. The system's 3D detail would enable the AI algorithms to function more precisely.
Intelligent-vision entertainment applications could include videogame characters that move realistically based on motion-capture information derived from, for example, athletes. In medicine, the system could let surgeons see past blood and other obstructions to noninvasively create a highly nuanced visual model of, for instance, a patient's heart.
Compared with the 2D facial- and object-recognition capabilities of security systems, 3D capture provides more clues and more accurate information about human motion, Zhang noted. This would prove particularly useful if a computer could analyze such subtle movements in real time.
With millions of data points mapped every second, Zhang's system generates large amounts of information. Future applications will require specially designed software able to make sense of so much data.
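To gauge the scale: 300,000 reference points per frame at 180 frames per second comes to 54 million points per second. A back-of-the-envelope estimate of the resulting data rate (the bytes-per-point figure is an assumption for illustration):

```python
# Back-of-the-envelope data-rate estimate for Zhang's system.
points_per_frame = 300_000    # reference points per frame (from the article)
frames_per_second = 180       # capture rate (from the article)
bytes_per_point = 12          # assumption: three float32 coordinates (x, y, z)

points_per_second = points_per_frame * frames_per_second
raw_rate_mb_s = points_per_second * bytes_per_point / 1e6

print(f"{points_per_second:,} points/s")          # 54,000,000 points/s
print(f"~{raw_rate_mb_s:.0f} MB/s raw geometry")  # ~648 MB/s
```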
"If intelligent algorithms can find a way to analyze the 3D motion data and extract the information needed, it will have an enormous impact in many fields," Zhang explained.
AI research could also help with complex motion-data analysis. David Cox, a principal investigator at the Rowland Institute, a scientific research organization at Harvard University, is focusing on graphics-processing units (GPUs). The fast special-purpose processors, first widely used to power applications such as videogames, have increasingly been harnessed for high-performance computing tasks.
Cox, working with collaborator Nicolas Pinto, analyzed thousands of models to find those best suited for object identification. With the help of GPUs, the resulting models could identify objects, regardless of background and positioning, better than existing vision systems.
"GPUs are a real game-changer for scientific computing," Pinto noted.
Consumer Applications
Yale University Associate Professor Eugenio Culurciello has developed a small, energy-efficient field-programmable gate array prototype aimed specifically at vision processing. Users could teach standard vision-processing neural algorithms running on Culurciello's chip to recognize everyday objects. He said the technology could be used, for example, in assisted-living facilities to monitor patients' body movements and identify someone falling or becoming ill.
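As a rough sketch of the "teach the chip" idea, the following trains a single-layer perceptron to separate two everyday objects from toy feature vectors. The features, labels, and training loop are illustrative stand-ins, not Culurciello's actual algorithms:

```python
import numpy as np

# Toy "teaching" loop: a single-layer perceptron learns to separate
# two everyday objects from illustrative 4-dimensional feature
# vectors (stand-ins for features a vision front end would extract).
rng = np.random.default_rng(0)
cup_features = rng.normal(loc=1.0, scale=0.3, size=(50, 4))    # label +1
book_features = rng.normal(loc=-1.0, scale=0.3, size=(50, 4))  # label -1
X = np.vstack([cup_features, book_features])
y = np.array([1] * 50 + [-1] * 50)

w = np.zeros(4)
b = 0.0
for _ in range(10):                  # a few passes over the examples
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:   # misclassified: nudge the boundary
            w += yi * xi
            b += yi

new_object = rng.normal(loc=1.0, scale=0.3, size=4)
print("recognized as:", "cup" if new_object @ w + b > 0 else "book")
```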
If Culurciello's chip, or one like it, were mass-produced at a low cost, it could become commonplace in everyday products. For instance, a group of National Sun Yat-sen University researchers is using neural networks married to a vision system to enable toys to recognize and react to children playing with them. The research's long-term goal is to create fully functional robots that can recognize and understand their human operators.
Correctly designing and developing the necessary hardware and software—and making sure they work together properly—will be a significant challenge for advanced-vision-system researchers and vendors. This will require extensive and time-consuming testing and validation, particularly for mission-critical technology developed for the military, as well as for the healthcare and security industries.
Thus, it could be some time before significant advances in intelligent-vision technology show up routinely in specialized and consumer applications.
Using AI to Unravel Market Complexity
Mark Ingebretsen

One lesson that government regulators took from the recent economic crisis is that they lack the tools they need to spot dangerous financial trends in time to prevent them. The lack of effective tools also hurt wealth managers and private investors, who saw untold billions of dollars vanish in the wake of the recent global economic meltdown.
Two new programs in the EU and US promise to help regulators and investors by analyzing the vast amounts of financial information on the Web.
In Europe, the three-year, €4.574 million FIRST project (Large-Scale Information Extraction and Integration Infrastructure for Supporting Financial Decision Making; http://project-first.eu) is embarking on a massive effort to extract and analyze financial-market data, as well as other relevant information such as blogger comments. FIRST's eight founding members include Italy's Banca Monte dei Paschi di Siena and Boerse Stuttgart, a Germany-based stock exchange.
The group's goal is to extract and organize online data both from structured, quantitative sources such as daily stock-trade reports and from sources such as financial chat-room comments and blogger opinions. The latter tend to be unreliable, so the system must assess their reliability, noted project coordinator Tomás Pariente Lobo, a project manager with the Spanish IT firm Atos Origin.
Also, he added, "The data is mostly unstructured and as such more difficult to analyze."
FIRST's main challenge is not simply the volume of data it must handle, he said. As the system organizes data retrieved from the Web, it will employ scalable machine-learning models designed to predict and detect events, he explained.
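The article doesn't detail FIRST's models, but a minimal sketch of such streaming event detection might smooth a per-post negativity score and flag sudden bursts. The lexicon, weights, and threshold here are illustrative assumptions, not FIRST's actual models:

```python
# Illustrative streaming detector: flag a burst of negative sentiment
# in incoming financial posts. Lexicon, smoothing factor, and alert
# threshold are assumptions for illustration only.
NEGATIVE_WORDS = {"default", "crash", "selloff", "downgrade", "panic"}
ALPHA = 0.3          # smoothing factor for the running average
ALERT_LEVEL = 0.15   # alert when smoothed negativity exceeds this

def negativity(post):
    """Fraction of a post's words drawn from the negative lexicon."""
    words = post.lower().split()
    return sum(w.strip(".,!") in NEGATIVE_WORDS for w in words) / max(len(words), 1)

smoothed = 0.0
stream = [
    "quarterly earnings beat expectations",
    "rumors of a downgrade spark panic selloff",
    "traders fear default as crash deepens, panic spreads",
]
for post in stream:
    smoothed = ALPHA * negativity(post) + (1 - ALPHA) * smoothed
    if smoothed > ALERT_LEVEL:
        print("ALERT:", post)   # fires on the third post
```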
Other multicriteria AI models will give regulators and financial-services workers alternatives for making decisions based on large amounts of collected data. FIRST is still in its early stages, so participants have not yet worked out many of the project's details. They hope to perfect simple interfaces that will let even nontechnical users work with the system, and they want to provide real-time analysis capabilities, which are important in the fast-moving world of finance.
Researchers at the US Department of Energy's Office of Science are also looking at ways to help regulators and others keep up with the deluge of financial information.
"The security and stability of markets is a national-security issue of the first order, as recent events have made clear," said David Leinweber, head of the Department of Energy's Berkeley Laboratory's Center for Innovative Financial Technology.
The CIFT program, still in its early stages, will employ the Berkeley Lab's supercomputers to model and thus better understand market movements. This is a new but highly appropriate use for high-performance computing, explained Leinweber.
Previous computationally intensive market analysis has at times relied on data mining. However, Leinweber said, data mining relies on establishing data patterns, which could be a hazardous approach for financial issues. For example, he explained, data mining could uncover statistically significant, but invalid relationships between market performance and various external factors. Milk production and technology company stock values could rise at the same time but still have nothing to do with each other.
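The pitfall is easy to reproduce: two independent, upward-trending series correlate strongly even though neither influences the other, as the following sketch with synthetic data shows:

```python
import numpy as np

# Two independent upward-trending series: synthetic "milk production"
# and a synthetic "tech stock index". Neither influences the other,
# yet their raw correlation is high because both simply trend upward.
rng = np.random.default_rng(1)
t = np.arange(120)                               # 120 months
milk = 100 + 0.8 * t + rng.normal(0, 3, 120)     # independent trend + noise
stocks = 50 + 1.5 * t + rng.normal(0, 8, 120)    # independent trend + noise

print(f"raw correlation: {np.corrcoef(milk, stocks)[0, 1]:.2f}")   # ~0.99

# Differencing removes the shared trend; the month-to-month changes
# are essentially uncorrelated, exposing the relationship as spurious.
print(f"correlation of changes: "
      f"{np.corrcoef(np.diff(milk), np.diff(stocks))[0, 1]:.2f}")  # ~0.00
```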
Among its other benefits, the CIFT project is designed to distinguish genuinely significant relationships between data sets from coincidental ones.
Intelligent Exoskeleton Helps Paraplegics Walk
Mark Ingebretsen

People with impaired mobility might soon have an alternative to the wheelchair thanks to a mechanized exoskeleton called eLEGS, the first device of its kind that is fully mobile and doesn't need to be plugged into an electrical outlet.
Developed by Berkeley Bionics (http://berkeleybionics.com), eLEGS adapts to individual users' movements as it assists them in walking, due in large part to the system's AI capabilities. Crutches embedded with motion-detection sensors help wearers maintain their balance while walking; the sensors also provide feedback on the user's stride and speed to a computer worn as a backpack.
eLEGS measures a person's gait and, using AI, discerns a wearer's desired movements by recognizing the angles of each person's arms, the crutches' angles, and the force exerted on each crutch.
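A minimal sketch of such intent recognition, assuming crutch angles and crutch forces as inputs; the thresholds and decision rules are illustrative assumptions, not Berkeley Bionics' algorithm:

```python
# Illustrative rule-based intent recognizer for an exoskeleton.
# Inputs mirror the signals the article names: crutch angles and the
# force on each crutch. Thresholds and rules are assumptions for
# illustration, not Berkeley Bionics' actual control algorithm.
def infer_intent(left_crutch_angle_deg, right_crutch_angle_deg,
                 left_force_n, right_force_n):
    """Guess the wearer's desired action from crutch pose and loading."""
    if left_force_n > 200 and right_force_n > 200:
        return "stand"               # heavy, even loading: rising from a seat
    if left_crutch_angle_deg > 15 and left_force_n > right_force_n:
        return "step right"          # weight shifted left: swing the right leg
    if right_crutch_angle_deg > 15 and right_force_n > left_force_n:
        return "step left"
    return "hold"                    # no clear intent: maintain posture

print(infer_intent(20, 5, 150, 80))   # -> step right
print(infer_intent(5, 22, 90, 160))   # -> step left
```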
The exoskeleton's human-machine interface consists of two levels. "On one level, the machine interface senses a user's desired action," noted Berkeley Bionics Vice President of Engineering John Fogelin. At another level, he explained, to keep the user from falling, the interface "manages the actual control that goes into performing a single stride."
The key mechanical challenge was simplifying the device's stride to limit the use of the system's motor and thereby conserve power. The eLEGS battery, which accounts for 20 percent of the exoskeleton's weight, lasts up to six hours.
At the same time, the device had to duplicate a person's walking movements so that the user and system wouldn't work against each other, Fogelin said. To accomplish this, Berkeley Bionics developed an advanced, AI-based algorithm to control the motor.
Future software developments will focus on adapting eLEGS to new environments. "The real world has ramps, curbs, stairs, mixed surfaces, and so forth," Fogelin explained. "We are tuning our algorithms to handle these challenges."
The algorithms could interpret the terrain ahead and adjust the device's stride accordingly. They also might sense that a user is becoming fatigued and provide more mechanical support.
"Our design philosophy is that eLEGS should adapt to the environment it is in without the requirement of preprogramming," explained Fogelin. "Therefore, our greatest challenge is anticipating all possible impediments."
Berkeley Bionics has developed assisted-mobility devices for DARPA. In 2008, the company introduced a third-generation exoskeleton device called the Human Universal Load Carrier (HULC, www.lockheedmartin.com/products/hulc), designed to help soldiers carry heavy gear over difficult terrain.
eLEGS' initial capabilities are more modest. Berkeley Bionics plans to introduce the device in the next few months for clinical trials at select rehabilitative centers. The exoskeleton's trial version will let users sit, stand up from a seated position, walk in a straight line at a normal pace, and change directions while striding at varying speeds.
Berkeley Bionics is looking at introducing additional capabilities. "We are actively researching new strategies to capture a user's [intended movements]," Fogelin explained, "while retaining a simplicity in design to minimize the learning curve for new users."
Gene Emmer, founder and president of Med Services Europe, a manufacturer of equipment for the disabled, said eLEGS and similar devices will have a major impact on people who cannot walk on their own. However, he added, the devices are expensive, so their use will be limited.