Parallel Implementation of Back-Propagation Algorithm in Networks of Workstations
January 2005 (vol. 16 no. 1)
pp. 24-34

Abstract—This paper presents an efficient mapping scheme for the multilayer perceptron (MLP) network trained using the back-propagation (BP) algorithm on a network of workstations (NOW). A hybrid partitioning (HP) scheme is used to partition the network, and each partition is mapped onto the processors of the NOW. We derive the processing time and memory space required to implement the parallel BP algorithm on a NOW. Performance parameters such as speed-up and space reduction factor are evaluated for the HP scheme and compared with earlier work that used a vertical partitioning (VP) scheme to map the MLP onto a NOW. The performance of the HP scheme is evaluated by solving an optical character recognition (OCR) problem on a network of ALPHA machines. The analytical and experimental results show that the proposed parallel algorithm achieves better speed-up, lower communication time, and a better space reduction factor than the earlier algorithm. This paper also presents a simple and efficient static mapping scheme for heterogeneous systems. Using divisible load scheduling theory, a closed-form expression is obtained for the number of neurons assigned to each processor in the NOW. Analytical and experimental results for the static mapping problem on NOWs are also presented.
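The paper's closed-form expression assigns neurons in proportion to each heterogeneous processor's speed. As a rough illustration of the idea (not the paper's exact formula), the following sketch splits a layer's neurons inversely to each workstation's per-neuron computation time, a common divisible-load heuristic; the function name and inputs are hypothetical:

```python
def assign_neurons(total_neurons, per_neuron_time):
    """Split neurons across heterogeneous workstations.

    Illustrative sketch only: load is assigned inversely proportional
    to each processor's per-neuron computation time, then rounded so
    the counts still sum to total_neurons.
    """
    speeds = [1.0 / w for w in per_neuron_time]
    total_speed = sum(speeds)
    # Ideal fractional shares for each processor.
    shares = [total_neurons * s / total_speed for s in speeds]
    counts = [int(x) for x in shares]
    # Hand leftover neurons to processors with the largest fractional parts.
    remainder = total_neurons - sum(counts)
    order = sorted(range(len(shares)),
                   key=lambda i: shares[i] - counts[i], reverse=True)
    for i in order[:remainder]:
        counts[i] += 1
    return counts

# Faster machines (smaller per-neuron time) receive more neurons.
print(assign_neurons(100, [1.0, 2.0, 4.0]))  # → [57, 29, 14]
```

The paper's actual derivation additionally accounts for communication delays between workstations, which this proportional split ignores.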

References:
[1] Y.L. Murphey and Y. Luo, “Feature Extraction for a Multiple Pattern Classification Neural Network System,” Pattern Recognition Proc., vol. 2, pp. 220-223, 2002.
[2] M. Nikoonahad and D.C. Liu, “Medical Ultrasound Imaging Using Neural Networks,” Electronics Letters, vol. 2, no. 6, pp. 18-23, 1990.
[3] D.E. Rumelhart and J.L. McClelland, eds., Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Cambridge, Mass.: MIT Press, 1986.
[4] T.J. Sejnowski and C.R. Rosenberg, “Parallel Networks that Learn to Pronounce English Text,” Complex Systems, vol. 1, pp. 145-168, 1987.
[5] K.S. Narendra and K. Parthasarathy, “Identification and Control of Dynamical Systems Using Neural Network,” IEEE Trans. Neural Networks, vol. 1, pp. 4-27, 1990.
[6] H. Yoon, J.H. Nang, and S.R. Maeng, “Parallel Simulation of Multilayered Neural Networks on Distributed-Memory Multiprocessors,” Microprocessing and Microprogramming, vol. 29, pp. 185-195, 1990.
[7] E. Deprit, “Implementing Recurrent Back-Propagation on the Connection Machine,” Neural Networks, vol. 2, pp. 295-314, 1989.
[8] D.A. Pomerleau et al., “Neural Network Simulation at Warp Speed: How We Got 17 Million Connections Per Second,” Proc. IEEE Second Int'l Conf. Neural Networks II, vol. 3, pp. 119-137, 1989.
[9] J. Hicklin and H. Demuth, “Modeling Neural Networks on the MPP,” Proc. Second Symp. Frontiers of Massively Parallel Computation, pp. 39-42, 1988.
[10] J.A. Feldman et al., “Computing with Structured Connectionist Networks,” Comm. ACM, vol. 31, no. 2, pp. 170-187, 1988.
[11] B.K. Mak and U. Egecioglu, “Communication Parameter Test and Parallel Backpropagation on iPSC/2 Hypercube Multiprocessor,” IEEE Frontier, pp. 1353-1364, 1990.
[12] K. Joe, Y. Mori, and S. Miyake, “Simulation of a Large-Scale Neural Network on a Parallel Computer,” Proc. 1989 Conf. Hypercubes, Concurrent Computation Application, pp. 1111-1118, 1989.
[13] D. Naylor and S. Jones, “A Performance Model for Multilayer Neural Networks in Linear Arrays,” IEEE Trans. Parallel and Distributed Systems, vol. 5, no. 12, pp. 1322-1328, Dec. 1994.
[14] A. El-Amawy and P. Kulasinghe, “Algorithmic Mapping of Feedforward Neural Networks onto Multiple Bus Systems,” IEEE Trans. Parallel and Distributed Systems, vol. 8, no. 2, pp. 130-136, Feb. 1997.
[15] T.M. Madhyastha and D.A. Reed, “Learning to Classify Parallel Input/Output Access Patterns,” IEEE Trans. Parallel and Distributed Systems, vol. 13, no. 8, pp. 802-813, Aug. 2002.
[16] N. Sundararajan and P. Saratchandran, Parallel Architecture for Artificial Neural Networks. IEEE CS Press, 1998.
[17] T.-P. Hong and J.-J. Lee, “A Nearly Optimal Back-Propagation Learning Algorithm on a Bus-Based Architecture,” Parallel Processing Letters, vol. 8, no. 3, pp. 297-306, 1998.
[18] S. Mahapatra, “Mapping of Neural Network Models onto Systolic Arrays,” J. Parallel and Distributed Computing, vol. 60, no. 6, pp. 667-689, 2000.
[19] V. Kumar, S. Shekhar, and M.B. Amin, “A Scalable Parallel Formulation of the Back-Propagation Algorithm for Hypercubes and Related Architectures,” IEEE Trans. Parallel and Distributed Systems, vol. 5, no. 10, pp. 1073-1090, Oct. 1994.
[20] S.Y. Kung and J.N. Hwang, “A Unified Systolic Architecture for Artificial Neural Networks,” J. Parallel and Distributed Computing, vol. 6, pp. 357-387, 1989.
[21] W.M. Lin, V.K. Prasanna, and K.W. Przytula, “Algorithmic Mapping of Neural Network Models onto Parallel SIMD Machines,” IEEE Trans. Computers, vol. 40, no. 12, pp. 1390-1401, Dec. 1991.
[22] J. Ghosh and K. Hwang, “Mapping Neural Networks onto Message Passing Multicomputers,” J. Parallel and Distributed Computing, Apr. 1989.
[23] Y. Fujimoto, N. Fukuda, and T. Akabane, “Massively Parallel Architecture for Large Scale Neural Network Simulation,” IEEE Trans. Neural Networks, vol. 3, no. 6, pp. 876-887, 1992.
[24] V.S. Sunderam, “PVM: A Framework for Parallel Distributed Computing,” Concurrency: Practice and Experience, vol. 2, no. 4, pp. 315-339, 1990.
[25] S.Y. Kung, Digital Neural Networks. Englewood Cliffs, N.J.: Prentice Hall, 1993.
[26] V. Sudhakar and C. Siva Ram Murthy, “Efficient Mapping of Back-Propagation Algorithm onto a Network of Workstations,” IEEE Trans. Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 28, no. 6, pp. 841-848, 1998.
[27] D.S. Newhall and J.C. Horvath, “Analysis of Text Using a Neural Network: A Hypercube Implementation,” Proc. Conf. Hypercubes, Concurrent Computers, Applications, pp. 1119-1122, 1989.
[28] L.C. Chu and B.W. Wah, “Optimal Mapping of Neural Network Learning on Message-Passing Multicomputers,” J. Parallel and Distributed Computing, vol. 14, pp. 319-339, 1992.
[29] T. Leighton, Introduction to Parallel Algorithms and Architectures. Morgan Kaufmann Publishers, 1992.
[30] X. Zhang and M. McKenna, “The Back-Propagation Algorithm on Grid and Hypercube Architecture,” Technical Report RL90-9, Thinking Machines Corp., 1990.
[31] S.K. Foo, P. Saratchandran, and N. Sundararajan, “Application of Genetic Algorithm for Parallel Implementation of Backpropagation Neural Networks,” Proc. Int'l Symp. Intelligent Robotic Systems, pp. 76-79, 1995.
[32] S. Haykin, Neural Networks: A Comprehensive Foundation. Prentice Hall Int'l, 1999.
[33] V. Bharadwaj, D. Ghose, V. Mani, and T.G. Robertazzi, Scheduling Divisible Loads in Parallel and Distributed Systems, IEEE CS Press, 1996.
[34] , 2004.
[35] R. Pasquini and V. Rego, “Optimistic Parallel Simulation over a Network of Workstations,” Proc. Winter Simulation Conf., vol. 2, pp. 5-8, 1999.

Index Terms:
Multilayer perceptron, back-propagation, network of workstation, optical character recognition, performance measures, divisible load theory.
S. Suresh, S.N. Omkar, V. Mani, "Parallel Implementation of Back-Propagation Algorithm in Networks of Workstations," IEEE Transactions on Parallel and Distributed Systems, vol. 16, no. 1, pp. 24-34, Jan. 2005, doi:10.1109/TPDS.2005.11