The Community for Technology Leaders
2013 IEEE Conference on Computer Vision and Pattern Recognition (2013)
Portland, OR, USA
June 23, 2013 to June 28, 2013
ISSN: 1063-6919
pp: 3190-3197
In recent years, the efficiency of large-scale object detection has emerged as an important topic due to the exponential growth in the size of benchmark object detection datasets. Most current object detection methods focus on improving the accuracy of large-scale object detection, with efficiency being an afterthought. In this paper, we present the Efficient Maximum Appearance Search (EMAS) model, which is an order of magnitude faster than existing state-of-the-art large-scale object detection approaches while maintaining comparable accuracy. Our EMAS model represents an image as an ensemble of densely sampled feature points with the proposed Pointwise Fisher Vector encoding, so that the learnt discriminative scoring function can be applied locally. Consequently, the object detection problem is transformed into searching an image sub-area for maximum local appearance probability, thereby making EMAS an order of magnitude faster than traditional detection methods. In addition, the proposed model is suitable for incorporating global context at negligible extra computational cost. EMAS can also incorporate the fusion of multiple features, which greatly improves its performance in detecting multiple object categories. Our experiments show that the proposed algorithm can perform detection of 1000 object classes in less than one minute per image on the ImageNet ILSVRC2012 dataset, and of 107 object classes in less than 5 seconds per image on the SUN09 dataset, using a single CPU.
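To make the reduction concrete: once each densely sampled feature point carries its own additive local score (via the pointwise encoding), detection amounts to finding the image sub-window whose summed score is maximal. The sketch below is a minimal, hypothetical illustration of that search step only, not the paper's actual algorithm; it assumes a precomputed per-point score map and uses an integral image so that each candidate window's sum costs O(1), with an exhaustive scan over window corners.

```python
import numpy as np

def best_window(score_map):
    """Return (top, left, bottom, right, score) of the max-sum sub-window.

    Illustrative sketch: exhaustively scans all axis-aligned windows over a
    per-point score map, using an integral image for O(1) window sums.
    """
    h, w = score_map.shape
    # Integral image with a zero border: ii[r, c] = sum of score_map[:r, :c].
    ii = np.zeros((h + 1, w + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(score_map, axis=0), axis=1)
    best = (0, 0, 0, 0, -np.inf)
    for t in range(h):                       # top edge (inclusive)
        for b in range(t + 1, h + 1):        # bottom edge (exclusive)
            for l in range(w):               # left edge (inclusive)
                for r in range(l + 1, w + 1):  # right edge (exclusive)
                    # Window sum via four integral-image lookups.
                    s = ii[b, r] - ii[t, r] - ii[b, l] + ii[t, l]
                    if s > best[4]:
                        best = (t, l, b, r, s)
    return best

# Toy score map: positive scores where the object's local appearance
# matches, negative elsewhere (values are made up for illustration).
scores = np.array([[-1.0, -1.0, -1.0],
                   [-1.0,  5.0,  2.0],
                   [-1.0,  3.0, -1.0]])
t, l, b, r, s = best_window(scores)
# The best window covers the cluster of positive scores.
```

The exhaustive scan is quartic in image side length; in practice such maximum-subwindow searches are accelerated with branch-and-bound (as in Efficient Subwindow Search) or coarse-to-fine pruning, which is what makes this formulation fast at scale.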

Z. Huang et al., "Efficient Maximum Appearance Search for Large-Scale Object Detection," 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 2013, pp. 3190-3197.