2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI) (2018)
Hong Kong
Jul 8, 2018 to Jul 11, 2018
ISSN: 2159-3477
ISBN: 978-1-5386-7099-6
pp: 509-515
ABSTRACT
Deep neural networks have achieved impressive results in computer vision and machine learning. Unfortunately, state-of-the-art networks are extremely compute- and memory-intensive, which makes them unsuitable for mW devices such as IoT end-nodes. Aggressive quantization of these networks dramatically reduces the computation and memory footprint. Binary-weight neural networks (BWNs) follow this trend, pushing weight quantization to the limit. Hardware accelerators for BWNs presented up to now have focused on core efficiency, disregarding the I/O bandwidth and system-level efficiency that are crucial for deploying accelerators in ultra-low power devices. We present Hyperdrive: a BWN accelerator that dramatically reduces I/O bandwidth by exploiting a novel binary-weight streaming approach, and that is capable of handling high-resolution images by virtue of its systolically scalable architecture. We achieve a system-level efficiency of 5.9 TOp/s/W (i.e., including I/Os), 2.2x higher than state-of-the-art BNN accelerators, even though our core uses resource-intensive FP16 arithmetic for increased robustness.
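To illustrate the binary-weight quantization the abstract refers to, the following is a minimal sketch of a common BWN scheme (per-filter sign weights plus a real-valued scaling factor, as popularized by BinaryConnect/XNOR-Net-style approaches). It is not taken from the paper and does not reproduce Hyperdrive's exact quantization or weight-streaming scheme; the function and tensor shapes below are illustrative assumptions.

```python
import numpy as np

def binarize_weights(w):
    """Approximate a conv weight tensor w of shape (out_ch, in_ch, k, k)
    as alpha * sign(w), with one scaling factor alpha per output filter
    (a common BWN choice; not necessarily the paper's exact scheme)."""
    alpha = np.abs(w).reshape(w.shape[0], -1).mean(axis=1)   # per-filter scale
    w_bin = np.sign(w).astype(np.int8)                        # weights in {-1, +1}
    w_bin[w_bin == 0] = 1                                     # map sign(0) to +1
    return w_bin, alpha

# Example: quantize a hypothetical 3x3 conv layer with 16 output / 8 input channels.
w = np.random.randn(16, 8, 3, 3).astype(np.float16)           # FP16 master weights (illustrative)
w_bin, alpha = binarize_weights(w)
w_approx = alpha[:, None, None, None] * w_bin                  # reconstructed approximation
print("mean abs quantization error:", np.mean(np.abs(w.astype(np.float32) - w_approx)))
```

Storing only the 1-bit signs plus one scale per filter is what makes streaming the weights over a narrow I/O interface cheap, which is the system-level bottleneck the abstract targets.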
INDEX TERMS
computer vision, feedforward neural nets, Internet of Things, learning (artificial intelligence), low-power electronics, microprocessor chips
CITATION

R. Andri, L. Cavigelli, D. Rossi and L. Benini, "Hyperdrive: A Systolically Scalable Binary-Weight CNN Inference Engine for mW IoT End-Nodes," 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), Hong Kong, 2018, pp. 509-515.
doi:10.1109/ISVLSI.2018.00099