2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
Honolulu, Hawaii, USA
July 21, 2017 to July 26, 2017
ISSN: 1063-6919
ISBN: 978-1-5386-0457-1
pp: 6005-6013
ABSTRACT
Zero-shot learning for visual recognition has received much interest in recent years. However, the semantic gap between visual features and their underlying semantics remains the biggest obstacle in zero-shot learning. To address this challenge, we propose an effective Low-rank Embedded Semantic Dictionary learning (LESD) model with an ensemble strategy. Specifically, we formulate a novel framework that jointly seeks a low-rank embedding and a semantic dictionary to link visual features with their semantic representations, thereby capturing features shared across the observed classes. Moreover, an ensemble strategy is adopted to learn multiple semantic dictionaries that constitute a latent basis for the unseen classes. Consequently, our model can extract a variety of visual characteristics of objects that generalize well to unseen categories. Extensive experiments on several zero-shot benchmarks verify that the proposed model outperforms state-of-the-art approaches.
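To make the zero-shot setting concrete, the sketch below shows the standard recipe the abstract builds on: learn a mapping from class-level semantic vectors (e.g. attributes) to visual features on seen classes, then classify samples from unseen classes by nearest synthesized prototype. This is a minimal illustrative baseline on synthetic data (a closed-form ridge regression), not the paper's LESD model; all array names and dimensions here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all data synthetic): 4 seen classes, 2 unseen classes,
# 5-D class attribute (semantic) vectors, 10-D visual features.
A_seen = rng.normal(size=(4, 5))     # semantic vectors of seen classes
A_unseen = rng.normal(size=(2, 5))   # semantic vectors of unseen classes
W_true = rng.normal(size=(5, 10))    # hidden semantics-to-visual map

def sample(A, n_per_class, noise=0.05):
    """Draw noisy visual features for each class described by rows of A."""
    X = np.repeat(A, n_per_class, axis=0) @ W_true
    X += noise * rng.normal(size=X.shape)
    y = np.repeat(np.arange(len(A)), n_per_class)
    return X, y

X_train, y_train = sample(A_seen, 20)
X_test, y_test = sample(A_unseen, 20)

# Ridge regression from semantics to features (closed form):
# W = argmin_W ||S W - X||^2 + lam ||W||^2, with S the per-sample semantics.
S_train = A_seen[y_train]
lam = 1e-3
W = np.linalg.solve(S_train.T @ S_train + lam * np.eye(5), S_train.T @ X_train)

# Zero-shot prediction: synthesize a visual prototype for each *unseen*
# class from its attributes alone, then assign each test feature to the
# nearest prototype in Euclidean distance.
protos = A_unseen @ W
dists = np.linalg.norm(X_test[:, None, :] - protos[None, :, :], axis=2)
pred = dists.argmin(axis=1)
acc = (pred == y_test).mean()
print(f"zero-shot accuracy: {acc:.2f}")
```

The key point the abstract targets is that this direct regression ignores structure shared across classes; LESD instead couples a low-rank embedding with learned semantic dictionaries so that unseen classes are expressed over a richer latent basis.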
INDEX TERMS
feature extraction, image representation, learning (artificial intelligence), semantic networks
CITATION

Z. Ding, M. Shao and Y. Fu, "Low-Rank Embedded Ensemble Semantic Dictionary for Zero-Shot Learning," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, Hawaii, USA, 2017, pp. 6005-6013.
doi:10.1109/CVPR.2017.636