A Cost-Effective Framework for Automated Vehicle-Pedestrian Near-Miss Detection Through Onboard Monocular Vision
2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Honolulu, Hawaii, USA
July 21, 2017 to July 26, 2017
Onboard monocular cameras have been widely deployed in both public transit and personal vehicles. Obtaining vehicle-pedestrian near-miss event data from onboard monocular vision systems may be more cost-effective than relying on onboard multi-sensor systems or traffic surveillance videos. However, extracting near-misses from onboard monocular vision is challenging, and little work has been published on the topic. This paper fills the gap by developing a framework that automatically detects vehicle-pedestrian near-misses through onboard monocular vision. The proposed framework can estimate depth and real-world motion information from monocular video despite a moving background. Experimental results from processing over 30 hours of video data demonstrate the system's ability to capture near-misses, validated against events logged by the Rosco/MobilEye Shield+ system, which uses four cameras working cooperatively. The detection overlap rate exceeds 90% when the thresholds are properly set.
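The abstract describes flagging near-misses by thresholding on estimated depth and motion. As an illustration only (the paper's actual depth/motion estimation pipeline and threshold values are not reproduced here), a threshold-based decision rule might look like the following sketch, where the distance and closing-speed estimates are assumed inputs and all threshold values are hypothetical:

```python
def is_near_miss(distance_m: float, closing_speed_mps: float,
                 dist_thresh_m: float = 2.0, ttc_thresh_s: float = 1.5) -> bool:
    """Flag a vehicle-pedestrian near-miss from estimated quantities.

    Illustrative rule: an event is flagged when the estimated distance
    falls below a threshold, or when the time-to-collision
    (distance / closing speed) falls below a threshold.
    Thresholds are hypothetical, not taken from the paper.
    """
    if distance_m < dist_thresh_m:
        return True
    # Only compute time-to-collision when the pedestrian and vehicle
    # are actually closing on each other (positive closing speed).
    if closing_speed_mps > 0 and distance_m / closing_speed_mps < ttc_thresh_s:
        return True
    return False
```

Tuning such thresholds would trade detection overlap against false alarms, consistent with the paper's note that the 90%+ overlap rate depends on proper threshold settings.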
Videos, Cameras, Safety, Sensors, Feature extraction, Tracking, Surveillance
R. Ke, J. Lutin, J. Spears and Y. Wang, "A Cost-Effective Framework for Automated Vehicle-Pedestrian Near-Miss Detection Through Onboard Monocular Vision," 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, Hawaii, USA, 2017, pp. 898-905.