2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
Honolulu, Hawaii, USA
July 21, 2017 to July 26, 2017
ISSN: 1063-6919
ISBN: 978-1-5386-0457-1
pp: 6100-6108
ABSTRACT
Deep networks have shown impressive performance on many computer vision tasks. Recently, deep convolutional neural networks (CNNs) have been used to learn discriminative texture representations. One of the most successful approaches is the Bilinear CNN model, which explicitly captures the second-order statistics within deep features. However, these networks cut off the first-order information flow in the deep network, making gradient back-propagation difficult. We propose an effective fusion architecture, FASON, that combines the second-order and first-order information flows. Our method allows gradients to back-propagate freely through both flows, so the network can be trained effectively. We then build a multi-level deep architecture to exploit the first- and second-order information within different convolutional layers. Experiments show that our method improves over state-of-the-art methods on several benchmark datasets.
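To make the abstract's two "information flows" concrete, the sketch below computes both statistics from a single convolutional feature map: the first-order flow as global average pooling, and the second-order flow as bilinear (outer-product) pooling in the style of Bilinear CNNs. The fusion by concatenation and the signed-sqrt/L2 normalization are common conventions, not necessarily the paper's exact operators; all function and variable names here are illustrative assumptions.

```python
import numpy as np

def fason_style_fusion(feature_map):
    """Hedged sketch of fusing first- and second-order statistics of a
    CNN feature map (exact FASON fusion details are assumed, not quoted).

    feature_map: array of shape (C, H, W) from some convolutional layer.
    Returns a 1-D fused descriptor of length C + C*C.
    """
    C, H, W = feature_map.shape
    X = feature_map.reshape(C, H * W)      # C x N, N = H*W spatial positions

    # First-order flow: global average pooling over spatial positions.
    first_order = X.mean(axis=1)           # shape (C,)

    # Second-order flow: bilinear pooling, i.e. the C x C matrix of
    # channel co-occurrences averaged over spatial positions.
    second_order = (X @ X.T) / (H * W)     # shape (C, C)

    # Signed square-root and L2 normalization, a common post-processing
    # step for bilinear features (an assumption here, not from the abstract).
    vec = second_order.reshape(-1)
    vec = np.sign(vec) * np.sqrt(np.abs(vec))
    vec = vec / (np.linalg.norm(vec) + 1e-12)

    # Fuse the two flows; concatenation is one plausible choice that keeps
    # gradients flowing through both branches independently.
    return np.concatenate([first_order, vec])

fused = fason_style_fusion(np.random.rand(8, 4, 4))
print(fused.shape)  # (72,) = 8 first-order + 8*8 second-order entries
```

Because both branches are differentiable maps of the same feature map, gradients from a loss on the fused descriptor reach the convolutional layer through the first- and second-order paths simultaneously, which is the property the abstract emphasizes.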
INDEX TERMS
computer vision, feature extraction, feedforward neural nets, image fusion, image representation, image texture, learning (artificial intelligence)
CITATION

X. Dai, J. Y. Ng and L. S. Davis, "FASON: First and Second Order Information Fusion Network for Texture Recognition," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, Hawaii, USA, 2017, pp. 6100-6108.
doi:10.1109/CVPR.2017.646