2016 IEEE International Conference on Multimedia and Expo (ICME) (2016)
Seattle, WA, USA
July 11, 2016 to July 15, 2016
ISSN: 1945-788X
ISBN: 978-1-4673-7259-6
pp: 1-6
Jie Lei , Zhejiang University, Hangzhou, 310027, P.R. China
Xinhui Song , Zhejiang University, Hangzhou, 310027, P.R. China
Li Sun , Zhejiang University, Hangzhou, 310027, P.R. China
Mingli Song , Zhejiang University, Hangzhou, 310027, P.R. China
Na Li , Zhejiang International Studies University, Hangzhou, 310027, P.R. China
Chun Chen , Zhejiang University, Hangzhou, 310027, P.R. China
ABSTRACT
Visual separability between different objects in various image classification tasks is highly uneven. As a consequence, humans need descriptions at different levels of detail to separate objects whose similarities span multiple granularities. Meanwhile, deep networks, such as convolutional neural networks (CNNs), have demonstrated great ability to represent an object at multiple levels. Unfortunately, existing classification methods with deep networks typically use only the output of the last layer as the feature for training flat N-way classifiers, which fails to fit this multi-granularity character. In this paper, by regarding different CNN layers as multiple levels of abstraction, we propose a deep decision tree (DDT) that utilizes features from all layers to distinguish objects sharing great appearance similarities. First, deep features from multiple layers are extracted from deep networks as the input for building a DDT. Next, in the training phase, features from earlier layers are selected for splitting at deeper nodes. Finally, multiple DDTs are bagged to make the final prediction by majority vote. Experimental results on two datasets show that DDT greatly improves classification accuracy on multi-grained tasks over flat models.
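The abstract's pipeline (multi-layer CNN features as input, earlier-layer features at deeper tree nodes, bagged trees with a majority vote) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the layer names ("fc7", "conv5"), the synthetic scalar features, the two-level tree depth, and the median-based split thresholds are all assumptions made for the sketch.

```python
import random
from collections import Counter

random.seed(0)

# Toy multi-layer features: each sample maps a CNN layer name to one
# scalar feature, plus a class label. Layer names and data are
# illustrative stand-ins for real CNN activations.
def make_sample(label):
    base = float(label)
    return {"fc7": base + random.gauss(0, 0.3),    # last layer: coarse
            "conv5": base + random.gauss(0, 0.2),  # earlier layer: finer
            "label": label}

train = [make_sample(lbl) for lbl in (0, 1) for _ in range(25)]

def median(xs):
    xs = sorted(xs)
    return xs[len(xs) // 2]

class DDTStub:
    """Two-level tree mirroring the paper's rule: the root (depth 0)
    splits on the last-layer feature, deeper nodes (depth 1) split on
    an earlier-layer feature; leaves predict by majority training label."""
    def fit(self, samples):
        self.t_root = median([s["fc7"] for s in samples])
        self.t_child = median([s["conv5"] for s in samples])
        leaves = {}
        for s in samples:
            leaves.setdefault(self._route(s), []).append(s["label"])
        self.leaf_label = {k: Counter(v).most_common(1)[0][0]
                           for k, v in leaves.items()}
        return self

    def _route(self, s):
        # depth 0 uses the last layer, depth 1 an earlier layer
        return (s["fc7"] > self.t_root, s["conv5"] > self.t_child)

    def predict(self, s):
        # fall back to class 0 if a leaf saw no training data
        return self.leaf_label.get(self._route(s), 0)

# Bagging: train several trees on bootstrap resamples of the training
# set, then take the majority vote, as in the paper's final step.
trees = [DDTStub().fit(random.choices(train, k=len(train)))
         for _ in range(5)]

def bagged_predict(s):
    votes = Counter(t.predict(s) for t in trees)
    return votes.most_common(1)[0][0]

test = [make_sample(lbl) for lbl in (0, 1) for _ in range(10)]
acc = sum(bagged_predict(s) == s["label"] for s in test) / len(test)
```

On this well-separated toy data the bagged ensemble classifies almost all test samples correctly; the point is only to show how layer depth maps to node depth and how the per-tree votes are aggregated.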
INDEX TERMS
Decision trees, Feature extraction, Training, Visualization, Buildings, Cats, Image color analysis
CITATION

J. Lei, X. Song, L. Sun, M. Song, N. Li and C. Chen, "Learning deep classifiers with deep features," 2016 IEEE International Conference on Multimedia and Expo (ICME), Seattle, WA, USA, 2016, pp. 1-6.
doi:10.1109/ICME.2016.7552910