Issue No. 11 - Nov. 2013 (vol. 35)
ISSN: 0162-8828
pp: 2608-2623
M. Zeeshan Zia, Photogrammetry & Remote Sensing Lab., ETH Zurich, Zurich, Switzerland
M. Stark, Dept. of Comput. Sci., Stanford Univ., Stanford, CA, USA
B. Schiele, Comput. Vision & Multimodal Comput. Lab., Max-Planck-Inst. für Informatik, Saarbrücken, Germany
K. Schindler, Photogrammetry & Remote Sensing Lab., ETH Zurich, Zurich, Switzerland
ABSTRACT
Geometric 3D reasoning at the level of objects has recently received renewed attention in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse bounding boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations recover geometrically far more accurate object hypotheses than bounding boxes alone, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform the state of the art in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles according to their 3D geometry, and ultrawide baseline matching.
INDEX TERMS
Three-dimensional displays, Solid modeling, Geometry, Shape, Computational modeling, Detectors, Design automation, ultrawide baseline matching, 3D representation, recognition, single image 3D reconstruction, scene understanding
CITATION
M. Zeeshan Zia, M. Stark, B. Schiele, K. Schindler, "Detailed 3D Representations for Object Recognition and Modeling", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 35, no. 11, pp. 2608-2623, Nov. 2013, doi:10.1109/TPAMI.2013.87