2018 IEEE 29th International Conference on Application-specific Systems, Architectures and Processors (ASAP) (2018)
Milano, Italy
July 10, 2018 to July 12, 2018
ISSN: 2160-052X
ISBN: 978-1-5386-7480-2
pp: 1-8
Ruizhe Zhao , Imperial College London, London, United Kingdom
Shuanglong Liu , Imperial College London, London, United Kingdom
Ho-Cheung Ng , Imperial College London, London, United Kingdom
Erwei Wang , Imperial College London, London, United Kingdom
James J. Davis , Imperial College London, London, United Kingdom
Xinyu Niu , Corerain Technologies Ltd., Shenzhen, China
Xiwei Wang , China Academy of Space Technology, Beijing, China
Huifeng Shi , State Key Laboratory of Space-Ground Integrated Information Technology (SGIIT), Beijing, China
George A. Constantinides , Imperial College London, London, United Kingdom
Peter Y. K. Cheung , Imperial College London, London, United Kingdom
Wayne Luk , Imperial College London, London, United Kingdom
ABSTRACT
Deploying a deep neural network model on a reconfigurable platform, such as an FPGA, is challenging due to the enormous design space spanning both network models and hardware designs. A neural network model has various layer types, connection patterns and data representations, and the corresponding implementation can be customised with different architectural and modular parameters. Rather than manually exploring this design space, it is more effective to automate optimisation throughout an end-to-end compilation process. This paper provides an overview of recent literature proposing novel approaches to achieve this aim. We organise the material to mirror a typical compilation flow: front end, platform-independent optimisation and back end. Design templates for neural network accelerators are studied with a specific focus on their derivation methodologies. We also review previous work on network compilation and optimisation for other hardware platforms to gain inspiration regarding FPGA implementation. Finally, we propose some future directions for related research.
INDEX TERMS
Hardware, Optimization, Computational modeling, DSL, Field programmable gate arrays, Space exploration, Software
CITATION

R. Zhao et al., "Hardware Compilation of Deep Neural Networks: An Overview," 2018 IEEE 29th International Conference on Application-specific Systems, Architectures and Processors (ASAP), Milano, Italy, 2018, pp. 1-8.
doi:10.1109/ASAP.2018.8445088