Issue No. 5, May 2014 (vol. 36)
pp. 1012-1025
Raquel Urtasun, Toyota Technological Institute at Chicago, Chicago, IL, USA
ABSTRACT
In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms that reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar, or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow, and occupancy grids. For each of these cues, we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, we conduct experiments using different feature combinations. Furthermore, we show that, by employing context derived from the proposed method, we are able to improve over the state of the art in object detection and object orientation estimation in challenging and cluttered urban environments.
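To make the abstract's recipe concrete, below is a minimal, hypothetical sketch (not the authors' code) of the general pattern it describes: several per-cue likelihood terms combined into a joint log-linear score over candidate scene layouts, with weights adjusted by a contrastive-divergence-style update that compares features of the observed layout against features of a layout sampled from the current model. The feature function, candidate set, and scalar "layout" variable are placeholder assumptions for illustration only.

```python
# Illustrative sketch only: a log-linear model over candidate layouts with
# one weighted term per visual cue, trained with a contrastive-divergence-style
# update. All feature definitions here are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

CUES = ["tracklets", "vanishing_points", "semantic_labels", "scene_flow", "occupancy_grid"]

def cue_features(layout, observations):
    """One compatibility value per cue for a candidate layout (placeholder)."""
    return np.array([-abs(layout - observations[c]) for c in CUES])

def log_score(layout, observations, weights):
    """Unnormalized joint log-probability: weighted sum of per-cue terms."""
    return weights @ cue_features(layout, observations)

def sample_layout(observations, weights, candidates):
    """Draw a layout from the model distribution over a discrete candidate set."""
    scores = np.array([log_score(l, observations, weights) for l in candidates])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

def cd_update(weights, true_layout, observations, candidates, lr=0.05):
    """One CD-style step: raise data features, lower features of a model sample."""
    model_layout = sample_layout(observations, weights, candidates)
    grad = cue_features(true_layout, observations) - cue_features(model_layout, observations)
    return weights + lr * grad

# Toy usage: a scalar "layout" parameter observed through noisy cues.
candidates = np.linspace(-2.0, 2.0, 41)
weights = np.ones(len(CUES))
for _ in range(100):
    true_layout = rng.uniform(-1.5, 1.5)
    observations = {c: true_layout + rng.normal(scale=0.2) for c in CUES}
    weights = cd_update(weights, true_layout, observations, candidates)
print("learned cue weights:", np.round(weights, 2))
```

In the paper the layout variable is far richer (intersection topology, geometry, and object poses) and each cue has its own dedicated likelihood; the sketch only illustrates how independent cue terms can be fused and how their relative weights can be learned from data.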
INDEX TERMS
Roads, Vehicles, Layout, Three-dimensional displays, Semantics, Splines (mathematics), Hidden Markov models, Robotics, Autonomous vehicles, Scene Analysis, Image Processing and Computer Vision
CITATION
Raquel Urtasun, "3D Traffic Scene Understanding From Movable Platforms," IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 36, no. 5, pp. 1012-1025, May 2014, doi:10.1109/TPAMI.2013.185