2016 Fourth International Symposium on Computing and Networking (CANDAR)
Nov. 22, 2016 to Nov. 25, 2016
Neural networks (NNs) are widely used for many machine learning applications such as image processing and speech recognition. Since general-purpose processors such as CPUs and GPUs are energy inefficient for computing NNs, application-specific hardware accelerators for NNs (a.k.a. neural network accelerators, or NNAs) have been proposed to improve energy efficiency. However, existing NNAs are too customized for computing specific NNs and do not allow changing NN models or learning algorithms. This limitation prevents machine-learning researchers from exploiting NNAs, so we are developing a general-purpose NNA capable of computing any NN. Our NNA is equipped with reconfigurable logic in addition to various custom logic units, and is therefore called a reconfigurable NNA (RNNA). The RNNA is highly tuned for NN computation but allows end users to customize the hardware to compute their desired NN. This paper introduces the RNNA architecture and reports a performance analysis of the RNNA with a cycle-level simulator.
Artificial neural networks, Neurons, Computer architecture, Biological neural networks, Graphics processing units, Computational modeling
M. Ohba, S. Shindo, S. Miwa, T. Tsumura, H. Yamaki and H. Honda, "Initial Study of Reconfigurable Neural Network Accelerators," 2016 Fourth International Symposium on Computing and Networking (CANDAR), Hiroshima, Japan, 2016, pp. 707-709.