Issue No.12 - December (2006 vol.39)
Published by the IEEE Computer Society
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MC.2006.412
VisLab and the Evolution of Vision-Based UGVs
Massimo Bertozzi, Alberto Broggi, and Alessandra Fascioli
VisLab researchers have been working on developing unmanned ground vehicles for 15 years, closely following worldwide advancements in the field and setting milestones in the history of intelligent vehicles. Their accomplishments include the development of TerraMax, an autonomous vehicle that reached the finish line of the 2005 DARPA Grand Challenge. TerraMax, which uses artificial vision, laser scanners, GPS, inertial sensors, and map databases to sense and understand its environment, has a three-camera system that allows precise and efficient computation at a wide range of viewing distances.
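The idea of dedicating different cameras to different viewing distances can be illustrated with a minimal sketch. The camera names and range thresholds below are assumptions for illustration, not TerraMax's actual configuration:

```python
# Hypothetical sketch of choosing among three cameras by target viewing
# distance; names and range limits are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Camera:
    name: str
    min_range_m: float
    max_range_m: float

CAMERAS = [
    Camera("short", 0.0, 10.0),
    Camera("medium", 10.0, 40.0),
    Camera("long", 40.0, 100.0),
]

def select_camera(target_range_m: float) -> Camera:
    """Pick the camera whose usable range covers the target distance."""
    for cam in CAMERAS:
        if cam.min_range_m <= target_range_m < cam.max_range_m:
            return cam
    return CAMERAS[-1]  # beyond all ranges: fall back to the long-range camera

print(select_camera(25.0).name)  # medium
```

Restricting each camera to the distance band it images best is what keeps per-frame processing bounded while still covering both nearby obstacles and the far road ahead.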
Perception and Planning Architecture for Autonomous Ground Vehicles
Bob Touchton, Tom Galluzzo, Danny Kent, and Carl Crane
Team Cimar, a finalist in the 2004 and 2005 DARPA Grand Challenges, was a collaboration of the University of Florida's Center for Intelligent Machines and Robotics and several private companies. To prepare for these driverless competitions, the software engineering subteam designed and deployed a standardized software architecture, with accompanying software tools and libraries.
The team incorporated in its custom-built off-road autonomous ground vehicle key components such as six smart sensors for detecting environmental conditions and reporting a priori data, a smart arbiter for fusing data from multiple smart sensors, and a reactive driver to provide real-time navigation planning and obstacle avoidance.
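The smart-sensor/arbiter/reactive-driver pipeline described above can be sketched in miniature. This is not Team Cimar's code; the pessimistic-minimum fusion rule, the grid values, and all names are assumptions for illustration:

```python
# Hedged sketch of the sensor -> arbiter -> reactive-driver data flow.
# Each "smart sensor" reports a traversability score per candidate heading
# (0 = blocked, 1 = clear); values and fusion rule are illustrative.

def fuse_grids(grids):
    """Smart arbiter: combine per-sensor traversability estimates cell by
    cell, keeping the most pessimistic (lowest) score for each heading."""
    return [min(cells) for cells in zip(*grids)]

def reactive_driver(fused_grid):
    """Reactive driver: steer toward the most traversable heading."""
    return max(range(len(fused_grid)), key=fused_grid.__getitem__)

# Two smart sensors scoring five candidate headings.
lidar  = [0.9, 0.2, 0.8, 1.0, 0.5]
vision = [0.8, 0.9, 0.1, 0.9, 0.6]

fused = fuse_grids([lidar, vision])
print(reactive_driver(fused))  # 3 (the heading both sensors rate highly)
```

Fusing before planning means the reactive driver never has to reconcile conflicting sensors itself, which is what lets it run at real-time rates.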
Testing Driver Skill for High-Speed Autonomous Vehicles
Chris Urmson, William "Red" Whittaker, Sam Harbaugh, Michael Clark, and Phillip Koon
Carnegie Mellon University's Red Team developed robots that used a combination of autonomous and human preplanning to become two of only four vehicles to complete the 2005 DARPA Grand Challenge. The robots used onboard sensors to adjust a preplanned route to avoid obstacles and correct for position-estimation errors. To be this successful, CMU's researchers had to develop innovative algorithms and systems and test them rigorously to verify performance.
To Drive Is Human
Isaac Miller, Ephrahim Garcia, and Mark Campbell
As researchers watched the dust-covered robots triumphantly roll across the finish line in the 2005 DARPA Grand Challenge, they couldn't help but appreciate the hard work that went into writing this latest chapter in the story of man and machine. Given these accomplishments, however, it's natural to pose the question: Why don't we have robots that chauffeur us to work, taxi us home from the airport, or make that long drive to grandma's house?
On the Importance of Being Contextual
Paolo Lombardi, Bertrand Zavidovique, and Michael Talbert
Research investigating sensory-based vehicle control systems points to the promise of multimodal awareness as the key to improved performance across a broad spectrum of future mission spaces.
One postulated solution involves introducing, and accounting for, context cues that are known to be present or expected to appear transiently in a given environment. The solution can also incorporate heuristics for processing elements from alternative sensing modalities.
Memory-Based In Situ Learning for Unmanned Vehicles
Patrick McDowell, Brian S. Bourgeois, Donald A. Sofge, and S.S. Iyengar
The authors seek to develop teams of unmanned underwater vehicles (UUVs) that can adapt to their environment without requiring exhaustive trial-and-error testing or complex environmental modeling.
This work focuses on UUVs because they could make dangerous tasks such as searching for underwater hazards or surveying the ocean bottom safer and less costly for government and commercial operations. Ultimately, the authors seek to develop a robot team that can learn its roles and improve team strategies in dynamic unstructured environments such as underwater or urban settings that make communications and monitoring difficult.
A Vision for Supporting Autonomous Navigation in Urban Environments
Vason P. Srini
Future vehicles operating in diverse environments are expected to have computer controls for throttle, steering, and brakes to support collision avoidance, adaptive cruising, automatic parking, and safe driving. This will require advances that cut costs and improve reliability.
These autonomous navigation systems perform three basic functions: context gathering using sensors, processing, and action. The systems use local context to make tactical decisions in real time, such as slowing down when the vehicle ahead brakes or when the distance to a stop sign decreases.
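The tactical decision described above can be sketched as a single step of the sense-process-act loop. The speed-reduction rule, the 30 m threshold, and the function name are illustrative assumptions, not taken from the article:

```python
# Hedged sketch of one tactical decision in the sense-process-act loop:
# slow down when the lead vehicle brakes or a stop sign draws near.
# Thresholds and the 5 m/s reduction step are illustrative assumptions.

def decide_speed(current_speed, lead_braking, stop_sign_dist):
    """Return a target speed (m/s) from the local context this cycle."""
    if lead_braking or (stop_sign_dist is not None and stop_sign_dist < 30.0):
        return max(current_speed - 5.0, 0.0)  # decelerate, never below zero
    return current_speed  # context is clear: hold speed

print(decide_speed(20.0, lead_braking=True, stop_sign_dist=None))  # 15.0
```

In a real system this decision would run every control cycle, with the sensing stage supplying `lead_braking` and `stop_sign_dist` and the action stage mapping the target speed to throttle and brake commands.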