IEEE Intelligent Systems, vol. 13, no. 6, November/December 1998
Driving-assistance systems are now being considered and investigated as one of the most promising solutions to the problem of mobility. By "driving-assistance system," I mean a device that supports the driver in the driving task: if the driver becomes ill or drowsy, the safety system can issue warnings, either acoustic or optical. In case of sudden danger, the system can even take control of the vehicle. Depending on the user's choice, it could, for example, start an emergency maneuver and stop the vehicle in the emergency lane, or keep the vehicle in its traffic lane at a constant speed until the driver resumes control.
These devices can also provide automatic vehicle driving; that is, automating one or more driving tasks, such as
- following the road and keeping in the right lane,
- maintaining a safe distance between vehicles,
- regulating the vehicle's speed according to the traffic conditions and the road characteristics,
- moving across lanes to overtake vehicles and avoid obstacles,
- finding the correct and shortest route to a destination, or
- moving and parking in urban environments.
Both onboard equipment and road infrastructures can support these automatic activities. Each approach has its pros and cons, depending on the specific application. However, infrastructure-based applications, which also require onboard equipment, generally need a longer preparation time. So, the systems that take shape in the short term will probably be vehicle-based or, at most, basic versions of infrastructure-based systems. But no matter what form these systems take, their success will depend strongly on their means of sensing the surrounding environment, among which vision-based sensing plays a fundamental role.
WHY VISION-BASED SENSING?
Thanks to increasingly powerful computer systems and less expensive high-performance image-acquisition devices, vision-based sensing has gained popularity and importance in the last few years, not only in military applications but also in the civil field, particularly for automotive subsystems. Many research institutions, from academia to government to industry, are considering the advantages of using passive sensors such as cameras as the main means of gathering information about the surrounding environment.
Although real-time image processing is computationally intensive, visual information has a specific advantage over information obtained by active sensors (such as acoustic, laser, or radar-based sensors): it causes no intervehicle interference. Active sensors measure alterations of signals emitted by the sensors themselves. So, besides polluting the environment, they can interfere with one another when several vehicles carry the same kind of sensor. Furthermore, they suffer from wide variations in reflection ratios, depending on factors such as an obstacle's shape or material. Also, the maximum signal level must comply with specific safety rules.
Automotive Tasks for Vision-based Systems
Computer vision is extremely complex and highly demanding, but it can deliver a great amount of information, making it a powerful means for sensing the environment. So, researchers are widely employing it to address many automotive tasks. These tasks include road following (automatic movement along a given path, which includes lane and obstacle detection), platooning (an automatic vehicle following a manually driven vehicle), vehicle overtaking, automatic parking, collision avoidance, and driver-status monitoring.
Accomplishing these tasks requires measuring different quantities or recognizing patterns before closing the control loop. These subtasks include
- determining the vehicle's position relative to the lane and checking for obstacles on the path or known road signs (for road following),
- recognizing specific vehicles' characteristics and computing the time-to-impact (for platooning),
- sensing multiple lanes and detecting obstacles (for vehicle overtaking and collision avoidance),
- measuring the distance between parked vehicles (for automatic parking), and
- determining the position and following the movements of the driver's eyes and head (for driver-status monitoring).
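To make one of these subtasks concrete, consider computing the time-to-impact for platooning. A classic property of vision-based sensing is that time-to-contact can be estimated directly from the apparent growth of the lead vehicle in the image plane, without camera calibration or an explicit distance measurement. The sketch below illustrates this idea; the function name and pixel values are hypothetical, and a real system would track the vehicle's apparent width with an image-processing pipeline rather than receive it as an argument.

```python
def time_to_contact(width_prev: float, width_curr: float, dt: float) -> float:
    """Estimate seconds until contact with a tracked lead vehicle.

    Under a roughly constant closing speed, time-to-contact is
    approximately w / (dw/dt), where w is the vehicle's apparent
    width in pixels in the image plane.

    width_prev, width_curr: apparent width (pixels) in two
    consecutive frames taken dt seconds apart.
    Returns float('inf') if the gap is not closing.
    """
    growth_rate = (width_curr - width_prev) / dt  # pixels per second
    if growth_rate <= 0:  # the lead vehicle is not getting closer
        return float('inf')
    return width_curr / growth_rate

# Example: a lead vehicle's image grows from 100 to 104 pixels
# in 0.1 s, giving 104 / 40 = 2.6 seconds to contact.
```

Because the estimate uses only ratios of image measurements, it sidesteps the need to know the lead vehicle's true size or distance, which is one reason platooning was an early success of vision-based control.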
This special issue features a survey on the application of machine vision to intelligent transportation systems and contains articles on topics ranging from off-road and urban navigation, to visibility estimation, to the fusion of visual information with data from other sensors, to low-level procedures for lane detection and real-time processing, to road-sign recognition.
I am indebted to the Editor-in-Chief of IEEE Intelligent Systems, Daniel E. O'Leary, for the opportunity to present this extremely timely and strategic subject. I also thank the Intelligent Systems staff for all the professional support I received while organizing this issue. Finally, I thank the many reviewers who provided constructive comments on the content and presentation of the articles, and, of course, the authors, whose work is the real essence of this special issue.
Alberto Broggi is an associate professor at the Department of Computer and Systems Engineering, University of Pavia, Italy. His main interests are computer vision and computer architectures for unmanned vehicle navigation. He is the coordinator of the ARGO project, aimed at developing an autonomous vehicle prototype. From 1994 to 1998 he was a full researcher (assistant professor) at the University of Parma, where he received his PhD in information technology. Contact him at Dipartimento di Informatica e Sistemistica, Univ. of Pavia, Via Ferrata, 1, I-27100 Pavia, Italy; firstname.lastname@example.org; http://vision.unipv.it/~broggi.