2017 IEEE International Conference on Computer Vision Workshop (ICCVW) (2017)
Venice, Italy
Oct. 22, 2017 to Oct. 29, 2017
ISSN: 2473-9944
ISBN: 978-1-5386-1034-3
pp: 996-1004
Fine-grained classification of objects such as vehicles, natural objects and other classes is an important problem in visual recognition. It is challenging because small, localized differences between similar-looking objects determine the specific fine-grained label. At the same time, accurate classification must discount spurious changes in appearance caused by occlusions, partial views and proximity to clutter objects in a scene. A key contributor to fine-grained recognition is the use of discriminative parts and regions of objects. Past work has often attempted to solve classification and part localization separately, resulting in complex models and ad-hoc algorithms with low accuracy and slow processing. We propose a novel multi-task deep network architecture that jointly optimizes part localization and fine-grained class labels by learning from training data. The localization and classification sub-networks share most of their weights, yet have dedicated convolutional layers to capture finer class-specific information. We design our model to be memory- and computation-efficient so that it can be easily embedded in mobile applications. We demonstrate the effectiveness of our approach through experiments that achieve a new state-of-the-art 93.1% accuracy on the Stanford Cars-196 dataset, with a significantly smaller multi-task network (30M parameters) and significantly faster testing speed (78 FPS) than recently published results.
Index Terms: Training, Computer architecture, Birds, Convolution, Switches, Automobiles
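The abstract describes a multi-task network in which localization and classification sub-networks share most of their weights, with the shared features computed once and consumed by two task-specific heads. The sketch below illustrates that shared-trunk/two-head pattern only; the layer shapes, the number of parts, and the use of plain matrix multiplies in place of the paper's convolutional layers are all illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions for illustration (only N_CLASSES=196 comes from the
# Cars-196 dataset mentioned in the abstract; the rest are made up)
IN_DIM, FEAT, N_CLASSES, N_PARTS = 64, 128, 196, 8

def shared_backbone(x, W):
    # Stand-in for the shared convolutional trunk: one linear map + ReLU
    return np.maximum(x @ W, 0.0)

def classify_head(f, Wc):
    # Fine-grained class probabilities via a numerically stable softmax
    logits = f @ Wc
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def localize_head(f, Wl):
    # Regresses an (x, y) location for each of N_PARTS object parts
    return (f @ Wl).reshape(-1, N_PARTS, 2)

# Random weights stand in for parameters learned from training data
W = rng.standard_normal((IN_DIM, FEAT))
Wc = rng.standard_normal((FEAT, N_CLASSES))
Wl = rng.standard_normal((FEAT, N_PARTS * 2))

x = rng.standard_normal((4, IN_DIM))   # a batch of 4 inputs
f = shared_backbone(x, W)              # computed once, reused by both heads
probs = classify_head(f, Wc)           # shape (4, 196)
parts = localize_head(f, Wl)           # shape (4, 8, 2)
```

Because the backbone output `f` is computed once and fed to both heads, most parameters and computation are shared between the two tasks, which is the property the abstract credits for the small model size and fast test-time speed.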

X. Dai, B. Southall, N. Trinh and B. Matei, "Efficient Fine-Grained Classification and Part Localization Using One Compact Network," 2017 IEEE International Conference on Computer Vision Workshop (ICCVW), Venice, Italy, 2017, pp. 996-1004.