2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2017)
Honolulu, Hawaii, USA
July 21, 2017 to July 26, 2017
ISSN: 2160-7516
ISBN: 978-1-5386-0733-6
pp: 898-905
ABSTRACT
Onboard monocular cameras have been widely deployed in both public transit and personal vehicles. Obtaining vehicle-pedestrian near-miss event data from onboard monocular vision systems may be cost-effective compared with onboard multi-sensor systems or traffic surveillance videos. However, extracting near-misses from onboard monocular vision is challenging, and little related work has been published. This paper fills the gap by developing a framework that automatically detects vehicle-pedestrian near-misses through onboard monocular vision. The proposed framework can estimate depth and real-world motion information through monocular vision with a moving video background. Experimental results based on processing over 30 hours of video data demonstrate the system's ability to capture near-misses, as validated by comparison with the events logged by the Rosco/MobilEye Shield+ system, which uses four cameras working cooperatively. The detection overlap rate exceeds 90% with the thresholds properly set.
INDEX TERMS
Videos, Cameras, Safety, Sensors, Feature extraction, Tracking, Surveillance
CITATION
Ruimin Ke, Jerome Lutin, Jerry Spears, Yinhai Wang, "A Cost-Effective Framework for Automated Vehicle-Pedestrian Near-Miss Detection Through Onboard Monocular Vision", 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 898-905, 2017, doi:10.1109/CVPRW.2017.124