2017 IEEE 24th International Conference on High Performance Computing (HiPC) (2017)
Jaipur, India
Dec 19, 2017 to Dec 22, 2017
ISBN: 978-1-5386-2293-3
pp: 183-192
ABSTRACT
Training a Convolutional Neural Network (CNN) is a computationally intensive task, and parallelizing it has become critical to completing training in an acceptable time. However, there are two obstacles to developing a scalable parallel CNN in a distributed-memory computing environment: the high degree of data dependency on the model parameters between every pair of adjacent mini-batches, and the large amount of data to be transferred over the communication channel. In this paper, we present a parallelization strategy that maximizes the overlap of inter-process communication with computation. The overlap is achieved by using a dedicated thread per compute node that initiates communication as soon as the gradients become available. Because the backpropagation stage produces output data at each model layer, the communication for one layer's data can run concurrently with the computation of the remaining layers. To study the effectiveness of the overlap and its impact on scalability, we evaluate various model architectures and hyperparameter settings. When training the VGG-A model on the ImageNet data set, we achieve speedups of 62.97× and 77.97× on 128 compute nodes using mini-batch sizes of 256 and 512, respectively.
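The communication/computation overlap described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it only shows the general pattern of a dedicated per-node communication thread that averages each layer's gradients with an MPI allreduce while the main thread continues backpropagating through earlier layers. The layer names, array shapes, and queue-based hand-off below are illustrative placeholders, and it assumes an MPI library initialized with a thread level that permits MPI calls from a non-main thread (mpi4py requests MPI_THREAD_MULTIPLE by default).

```python
# Minimal sketch: overlap per-layer gradient allreduce with backpropagation.
# Assumes mpi4py and NumPy; layer names/shapes are placeholders.
import queue
import threading
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
nprocs = comm.Get_size()

# Gradients are handed to the communication thread as soon as each layer's
# backward pass finishes, so the allreduce for layer L overlaps with the
# backward computation of layers L-1, L-2, ...
grad_queue = queue.Queue()

def comm_thread():
    while True:
        item = grad_queue.get()
        if item is None:            # sentinel: no more gradients this step
            break
        name, grad = item
        # Sum the gradient buffer across all processes in place, then average.
        comm.Allreduce(MPI.IN_PLACE, grad, op=MPI.SUM)
        grad /= nprocs

# Toy stand-in for a model's per-layer gradient buffers (hypothetical shapes).
layer_grads = {f"conv{i}": np.random.rand(64, 64).astype(np.float32)
               for i in range(8)}

t = threading.Thread(target=comm_thread)
t.start()
for name in reversed(list(layer_grads)):   # backward pass: last layer first
    grad = layer_grads[name]               # stand-in for real backprop work
    grad_queue.put((name, grad))           # hand off to the communication thread
grad_queue.put(None)                       # signal end of this mini-batch
t.join()                                   # all gradients averaged before the update
```

In this pattern the main thread never blocks on communication during backpropagation; the final join is the only synchronization point before the parameter update, which is the source of the overlap the paper exploits.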
INDEX TERMS
backpropagation, convolution, distributed memory systems, feedforward neural nets, parallel processing
CITATION

S. Lee, D. Jha, A. Agrawal, A. Choudhary and W. Liao, "Parallel Deep Convolutional Neural Network Training by Exploiting the Overlapping of Computation and Communication," 2017 IEEE 24th International Conference on High Performance Computing (HiPC), Jaipur, India, 2018, pp. 183-192.
doi:10.1109/HiPC.2017.00030