A Scalable Parallel Formulation of the Backpropagation Algorithm for Hypercubes and Related Architectures
October 1994 (vol. 5 no. 10)
pp. 1073-1090

We present a new technique for mapping the backpropagation algorithm onto hypercube and related architectures. A key component of this technique is a network partitioning scheme called checkerboarding. Checkerboarding allows us to replace the all-to-all broadcast operation performed by the commonly used vertical network partitioning scheme with operations that are much faster on hypercubes and related architectures. Checkerboarding can be combined with the pattern partitioning technique to form a hybrid scheme that performs better than either of these schemes alone. Theoretical analysis and experimental results on nCUBE and CM5 show that our scheme performs better than the other schemes, for both uniform and nonuniform networks.
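The idea behind checkerboarding, as the abstract describes it, can be illustrated with a small sequential simulation: the weight matrix is split into 2-D blocks over a q x q processor grid, each processor multiplies its block by only the activation segment it already holds, and a row-wise sum reduction produces the output activations, so no processor ever needs the full activation vector via an all-to-all broadcast. The sketch below is illustrative only; the sizes, names, and layout are assumptions, not the paper's implementation.

```python
# Sequential simulation of checkerboard (2-D block) partitioning for one
# forward matrix-vector step. Illustrative sketch, not the paper's code.

def matvec(W, x):
    """Reference sequential matrix-vector product."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

n, q = 4, 2            # n neurons per layer, q x q processor grid (p = q*q)
b = n // q             # block size held by each simulated processor

# A small weight matrix and activation vector (arbitrary test values).
W = [[i * n + j + 1 for j in range(n)] for i in range(n)]
x = [1, 2, 3, 4]

# "Processor" (r, c) holds block W[r*b:(r+1)*b, c*b:(c+1)*b] and the
# activation segment x[c*b:(c+1)*b] -- its local data only.
partials = {}
for r in range(q):
    for c in range(q):
        block = [row[c*b:(c+1)*b] for row in W[r*b:(r+1)*b]]
        seg = x[c*b:(c+1)*b]
        partials[(r, c)] = matvec(block, seg)  # purely local compute

# A row-wise sum reduction (a fast collective on hypercubes) combines the
# partial products; no all-to-all broadcast of activations was needed.
y = []
for r in range(q):
    y.extend(sum(partials[(r, c)][i] for c in range(q)) for i in range(b))

assert y == matvec(W, x)  # matches the sequential result
print(y)
```

The communication pattern is the point: vertical partitioning would require every processor to receive the entire activation vector, while here each processor communicates only within its row of the grid.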

[1] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning Internal Representations by Error Propagation, ch. 8. Cambridge, MA: MIT Press, 1986.
[2] T. J. Sejnowski and C. R. Rosenberg, "NETtalk: A parallel network that learns to read aloud," Tech. Rep. JHU/EECS-86/01, Dept. of Elec. Eng. and Comput. Sci., Johns Hopkins Univ., Baltimore, MD, USA, 1986.
[3] S. Shekhar and M. B. Amin, "Generalization performance of feedforward neural networks," IEEE Trans. Knowl. Data Eng., vol. 4, pp. 177-185, Apr. 1992.
[4] S. Shekhar and S. Dutta, "Bond rating: A non-conservative application of neural network," in Proc. IEEE Int. Conf. Neural Networks, San Diego, CA, July 1988.
[5] H. White, "Economic prediction using neural networks: The case of IBM daily stock returns," in IEEE Int. Conf. Neural Networks, 1988, pp. 451-458.
[6] W. Allen and A. Saha, "Parallel neural network simulation using backpropagation for the ES-Kit environment," in Proc. 1989 Conf. Hypercubes, Concurrent Computers and Applications, 1989, pp. 1097-1102.
[7] W. M. Lin, V. K. Prasanna, and K. W. Przytula, "Algorithmic mapping of neural network models onto parallel SIMD machines," IEEE Trans. Comput., vol. 40, pp. 1390-1401, Dec. 1991.
[8] G. Blelloch and C. R. Rosenberg, "Network learning on the Connection Machine," Tech. Rep., MIT, Cambridge, MA, USA, Nov. 1986.
[9] H. Yoon and J. H. Nang, "Multilayer neural networks on distributed-memory multiprocessors," in Proc. Int. Conf. Neural Networks (IEEE/EEC), 1991, pp. 669-672.
[10] H. Yoon and J. H. Nang, "A distributed backpropagation algorithm of neural networks on distributed-memory multiprocessors," in Proc. Int. Conf. Parallel Processing, 1991, pp. 358-363.
[11] X. Zhang, "An efficient implementation of the backpropagation algorithm on the Connection Machine CM-2," Tech. Rep. RL89-1, Thinking Machines Corp., Aug. 1989.
[12] X. Zhang and M. McKenna, "The backpropagation algorithm on grid and hypercube architectures," Tech. Rep. RL90-9, Thinking Machines Corp., 1990.
[13] F. Baiardi, R. Mussard, R. Serr, and G. Valastro, "Feedforward neural networks on message passing parallel computers," in E. R. Caianiello, Ed., Proc. 2nd Italian Workshop on Parallel Architectures and Neural Networks. Singapore: World Scientific, 1990.
[14] M. Marchesi, G. Orlandi, F. Piazza, and A. Uncini, "Linear array architecture implementing the backpropagation neural network," in E. R. Caianiello, Ed., Proc. 2nd Italian Workshop on Parallel Architectures and Neural Networks. Singapore: World Scientific, 1990.
[15] D. S. Newhall and J. C. Horvath, "Analysis of text using a neural network: A hypercube implementation," in Proc. 1989 Conf. Hypercubes, Concurrent Computers and Applic., 1989, pp. 1119-1122.
[16] S. Y. Kung and J. N. Hwang, "A unified modeling of connectionist neural networks," J. Parallel Distributed Comput., vol. 6, pp. 358-387, 1989.
[17] K. Joe, Y. Mori, and S. Miyake, "Simulation of a large-scale neural network on a parallel computer," in Proc. 1989 Conf. Hypercubes, Concurrent Comput. Applic., 1989, pp. 1111-1118.
[18] M. Witbrock and M. Zagha, "An implementation of back-propagation learning on GF11: A large SIMD parallel computer," Tech. Rep. CMU-CS-89-208, Carnegie Mellon Univ., Pittsburgh, PA, USA, Dec. 1989.
[19] J. Bourrley, "Parallelization of a neural learning algorithm," in F. Andre, Ed., Proc. 1st Eur. Workshop on Hypercube and Distributed Computers. Amsterdam, Netherlands: North-Holland, 1989.
[20] B. K. Mak and O. Egecioglu, "Communication parameter tests and parallel backpropagation algorithms on iPSC/2 hypercube multiprocessor," in IEEE Frontier, 1990, pp. 1353-1364.
[21] A. Petrowski, L. Personnaz, G. Dreyfus, and C. Girault, "Parallel implementations of neural network simulations," in Proc. 1st Eur. Workshop Hypercube and Distrib. Comput. Amsterdam, Netherlands: Elsevier (North-Holland), 1989, pp. 205-218.
[22] N. Morgan, J. Beck, P. Kohn, and J. Bilmes, "Neurocomputing on the RAP," in K. W. Przytula and V. K. Prasanna, Eds., Parallel Digital Implementations of Neural Networks. Englewood Cliffs, NJ: Prentice-Hall, 1993.
[23] D. A. Pomerleau, G. L. Gusciora, D. S. Touretzky, and H. T. Kung, "Neural network simulation at warp speed: How we got 17 million connections per second," in Proc. Int. Conf. Neural Networks, San Diego, CA, June 1988.
[24] V. Kumar et al., Introduction to Parallel Computing: Design and Analysis of Parallel Algorithms. Redwood City, CA: Benjamin Cummings, 1994.
[25] V. Kumar, S. Shekhar, and M. B. Amin, "A highly parallel formulation of backpropagation on hypercubes: A summary of results," in Proc. Int. Conf. Neural Networks (IJCNN), Nov. 1992.
[26] W. J. Leinberger, "Vectorized checkerboarding formulation of the back-propagation algorithm for CM-5," M.S. thesis, Comput. Sci. Dept., Univ. of Minnesota, Minneapolis, MN, USA, 1993.
[27] J. McClelland and D. Rumelhart, Explorations in Parallel Distributed Processing. Cambridge, MA: MIT Press, 1988.
[28] G. L. Wilcox, M. Poliac, and M. Liebman, "Protein tertiary structure prediction using a large backpropagation network," in IJCNN (IEEE/EEC), 1990, pp. 365-369.
[29] B. W. Wah and L. Chu, "Efficient mapping of neural networks on multicomputers," in IEEE Int. Conf. Parallel Processing, 1990, pp. 234-241.
[30] M. Misra and V. K. Prasanna Kumar, "Neural network simulation on a reduced mesh of tree organization," in SPIE/SPSE Symp. Electron. Images, 1990.
[31] M. Misra, "Implementation of neural networks on parallel architectures," Tech. Rep. 295, Dept. of Elec. Eng., Univ. of Southern California, 1992.
[32] J. Ghosh and K. Hwang, "Mapping neural networks onto message passing multicomputers," J. Parallel Distrib. Computing, Apr. 1989.
[33] A. Gupta and V. Kumar, "On the scalability of matrix multiplication algorithms on parallel computers," Tech. Rep. TR 91-54, Comput. Sci. Dept., Univ. of Minnesota, Minneapolis, MN, USA, Sept. 1992.
[34] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation. Englewood Cliffs, NJ: Prentice-Hall, 1989.
[35] S. Lennart Johnsson and C.-T. Ho, "Optimum broadcasting and personalized communication in hypercubes," IEEE Trans. Comput., vol. 38, pp. 1249-1268, Sept. 1989.
[36] Y. Saad and M. H. Schultz, "Topological properties of hypercubes," IEEE Trans. Comput., vol. 37, pp. 867-872, 1988.
[37] J. Jenq and S. Sahni, "All pairs shortest paths on a hypercube multiprocessor," in Int. Conf. Parallel Processing, 1987, pp. 713-716.
[38] V. Kumar and A. Gupta, "Analyzing scalability of parallel algorithms and architectures," J. Parallel Distrib. Computing, 1994.
[39] A. Grama, A. Gupta, and V. Kumar, "Isoefficiency function: A scalability metric for parallel algorithms and architectures," IEEE Parallel Distrib. Technol., 1993, pp. 12-21.
[40] A. Gupta and V. Kumar, "The scalability of FFT on parallel computers," IEEE Trans. Parallel Distrib. Syst., vol. 4, pp. 922-932, Aug. 1993.
[41] V. Kumar and V. Singh, "Scalability of parallel algorithms for the all-pairs shortest-path problem," J. Parallel Distrib. Computing, vol. 13, no. 2, pp. 124-138, Oct. 1991.
[42] S. Ranka and S. Sahni, Hypercube Algorithms for Image Processing and Pattern Recognition. Berlin: Springer-Verlag, 1990.
[43] V. Singh, G. Agha, V. Kumar, and C. Tomlinson, "Scalability of parallel sorting on mesh multicomputers," in Proc. 5th Int. Parallel Processing Symp., 1991.
[44] D. J. Kuck, "A survey of parallel machine organization and programming," ACM Computing Surv., vol. 9, no. 1, Mar. 1977.

Index Terms:
backpropagation; hypercube networks; neural nets; parallel architectures; parallel machines; parallel algorithms; scalable parallel formulation; backpropagation algorithm; hypercubes; network partitioning scheme; checkerboarding; all-to-all broadcast operation; vertical network partitioning scheme; pattern partitioning technique; hybrid scheme; performance evaluation; nCUBE; CM5; nonuniform networks; uniform networks; neural networks
Citation:
V. Kumar, S. Shekhar, M.B. Amin, "A Scalable Parallel Formulation of the Backpropagation Algorithm for Hypercubes and Related Architectures," IEEE Transactions on Parallel and Distributed Systems, vol. 5, no. 10, pp. 1073-1090, Oct. 1994, doi:10.1109/71.313123