2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2017)
Honolulu, Hawaii, USA
July 21, 2017 to July 26, 2017
ISSN: 2160-7516
ISBN: 978-1-5386-0733-6
pp. 455-462
ABSTRACT
The emergence of deep neural networks has enabled human-level performance on large-scale computer vision tasks such as image classification. However, these deep networks typically contain a large number of parameters due to dense matrix multiplications and convolutions. As a result, such architectures are highly memory intensive, making them less suitable for embedded vision applications. Sparse computations are known to be much more memory efficient. In this work, we train and build neural networks which implicitly use sparse computations. We introduce additional gate variables to perform parameter selection and show that this is equivalent to using a spike-and-slab prior. We experimentally validate our method on both small and large networks, resulting in highly sparse neural network models.
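The core idea described above is that each weight is paired with a gate variable, and only weights whose gates are "on" participate in the forward pass. The following is a minimal sketch of that gating mechanism, assuming element-wise binary gates obtained by thresholding learned real-valued gate parameters; the variable names and threshold are illustrative and not the paper's exact formulation or training procedure.

```python
import numpy as np

# Sketch (assumed, not the authors' exact method): pair every weight w_ij
# with a gate g_ij in {0, 1}; the effective weight is g_ij * w_ij, so
# weights with g_ij = 0 are implicitly pruned from the computation.

rng = np.random.default_rng(0)

W = rng.normal(size=(4, 4))        # dense weight matrix
logits = rng.normal(size=(4, 4))   # real-valued gate parameters (hypothetical)

# Hard 0/1 gates from a sigmoid followed by a 0.5 threshold (illustrative).
gates = (1.0 / (1.0 + np.exp(-logits)) > 0.5).astype(W.dtype)

W_eff = gates * W                  # sparse effective weights
x = rng.normal(size=4)
y = W_eff @ x                      # forward pass uses only un-gated weights

print(f"fraction of weights gated off: {1.0 - gates.mean():.2f}")
```

In the paper's probabilistic interpretation, such a binary gate times a continuous weight corresponds to a spike-and-slab prior: the "spike" places mass exactly at zero (gate off), while the "slab" is the usual continuous distribution over the weight value (gate on).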
INDEX TERMS
Logic gates, Indexes, Sparse matrices, Training, Complexity theory, Biological neural networks
CITATION
Suraj Srinivas, Akshayvarun Subramanya, R. Venkatesh Babu, "Training Sparse Neural Networks", 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 455-462, 2017, doi:10.1109/CVPRW.2017.61