2015 International Conference on Parallel Architecture and Compilation (PACT) (2015)
San Francisco, CA, USA
Oct. 18, 2015 to Oct. 21, 2015
ISSN: 1089-795X
ISBN: 978-1-4673-9524-3
pp: 99-112
ABSTRACT
Machine learning is becoming pervasive; decades of research in neural network computation are now being leveraged to learn patterns in data and to perform computations that are difficult to express using standard programming approaches. Recent work has demonstrated that custom hardware accelerators for neural network processing can outperform software implementations in both performance and power consumption. However, there is neither an agreed-upon interface to neural network accelerators nor a consensus on neural network hardware implementations. We present a generic set of software/hardware extensions, X-FILES, that allow for the general-purpose integration of feedforward and feedback neural network computation in applications. The interface is independent of the network type, configuration, and implementation. Using these proposed extensions, we demonstrate and evaluate an example dynamically allocated, multi-context neural network accelerator architecture, DANA. We show that the combination of X-FILES and our hardware prototype, DANA, enables generic support and increased throughput for neural-network-based computation in multi-threaded scenarios.
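The abstract describes X-FILES as a software/hardware interface that lets an application request neural network computation without depending on the accelerator's type, configuration, or implementation. The C sketch below only illustrates what such an interface could look like from the application side; every name in it (xf_new_transaction, xf_write_input, xf_read_output, the nnid parameter) and the software stub standing in for the accelerator are assumptions introduced here for illustration, not the API defined in the paper.

/* Hypothetical sketch of a transaction-style interface to a neural
 * network accelerator, in the spirit of the abstract. All names and
 * the software stub below are invented for illustration; they are
 * not the X-FILES API. */
#include <stdio.h>
#include <stddef.h>

typedef int xf_tid_t;  /* per-request transaction id */

/* Start a transaction against a previously configured network (nnid).
 * Real hardware would allocate accelerator state; this stub just
 * returns a placeholder transaction id. */
static xf_tid_t xf_new_transaction(int nnid) { (void)nnid; return 1; }

/* Stream input elements to the pending transaction. */
static void xf_write_input(xf_tid_t tid, const float *in, size_t n) {
    (void)tid; (void)in; (void)n;  /* hardware would latch inputs here */
}

/* Block until the output vector is ready, then copy it out.
 * The stub fabricates constant outputs so the example runs. */
static void xf_read_output(xf_tid_t tid, float *out, size_t n) {
    (void)tid;
    for (size_t i = 0; i < n; i++) out[i] = 0.5f;
}

int main(void) {
    const int nnid = 3;  /* id of some preloaded network configuration */
    float in[4] = {0.1f, 0.2f, 0.3f, 0.4f};
    float out[2];

    xf_tid_t tid = xf_new_transaction(nnid);
    xf_write_input(tid, in, 4);
    xf_read_output(tid, out, 2);

    printf("output: %f %f\n", out[0], out[1]);
    return 0;
}

Separating a preconfigured network identifier from a per-request transaction id in this sketch mirrors the abstract's claims that the interface is independent of network configuration and that a multi-context accelerator such as DANA can serve requests from multiple threads.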
INDEX TERMS
Artificial neural networks, Hardware, Software, Registers, Standards, Accelerator architectures
CITATION
Schuyler Eldridge, Amos Waterland, Margo Seltzer, Jonathan Appavoo, and Ajay Joshi, "Towards General-Purpose Neural Network Computing," 2015 International Conference on Parallel Architecture and Compilation (PACT), pp. 99-112, 2015, doi:10.1109/PACT.2015.21