Bridging the semantic gaps of GPU acceleration for scale-out CNN-based big data processing: Think big, see small
2016 International Conference on Parallel Architecture and Compilation Techniques (PACT) (2016)
September 11–15, 2016
Mingcong Song, Department of Electrical and Computer Engineering, University of Florida, Gainesville, USA
Yang Hu, Department of Electrical and Computer Engineering, University of Florida, Gainesville, USA
Yunlong Xu, School of Electronic and Information Engineering, Xi'an Jiaotong University, China
Chao Li, Department of Computer Science and Engineering, Shanghai Jiao Tong University, China
Huixiang Chen, Department of Electrical and Computer Engineering, University of Florida, Gainesville, USA
Jingling Yuan, Wuhan University of Technology, China
Tao Li, Department of Electrical and Computer Engineering, University of Florida, Gainesville, USA
Convolutional Neural Networks (CNNs) have substantially advanced the state-of-the-art accuracy of object recognition, which is the core function of a myriad of modern multimedia processing techniques such as image/video processing, speech recognition, and natural language processing. GPU-based accelerators have gained increasing attention because the large number of highly parallel neurons in a CNN naturally matches the GPU computation pattern. In this work, we perform comprehensive experiments to investigate the performance bottlenecks and overheads of current GPU acceleration platforms for scale-out CNN-based big data processing. In our characterization, we observe two significant semantic gaps: the framework gap, which lies between the CNN-based data processing workflow and the data processing manner of the distributed framework; and the standalone gap, which lies between the uneven computation loads at different CNN layers and the fixed computing capacity provisioned by current GPU acceleration libraries. To bridge these gaps, we propose D3NN, a Distributed, Decoupled, and Dynamically tuned GPU acceleration framework for modern CNN architectures. In particular, D3NN features a novel analytical model that enables accurate time estimation of GPU-accelerated CNN processing, with only 5–10% error. Our evaluation results show that the throughput of a standalone processing node using D3NN gains up to a 3.7× performance improvement over the current standalone GPU acceleration platform. Our CNN-oriented GPU acceleration library with a built-in dynamic batching scheme achieves up to a 1.5× performance improvement over the non-batching scheme and outperforms the state-of-the-art deep learning library by up to 28% (performance mode) to 67% (memory-efficient mode).
Graphics processing units, Acceleration, Libraries, Semantics, Big data, Computer architecture
M. Song et al., "Bridging the semantic gaps of GPU acceleration for scale-out CNN-based big data processing: Think big, see small," 2016 International Conference on Parallel Architecture and Compilation Techniques (PACT), Haifa, Israel, 2016, pp. 315-326.