
S. Suresh, S.N. Omkar, and V. Mani, "Parallel Implementation of Back-Propagation Algorithm in Networks of Workstations," IEEE Transactions on Parallel and Distributed Systems, vol. 16, no. 1, pp. 24-34, Jan. 2005, doi:10.1109/TPDS.2005.11.
Abstract—This paper presents an efficient mapping scheme for a multilayer perceptron (MLP) network trained using the back-propagation (BP) algorithm on a network of workstations (NOWs). A hybrid partitioning (HP) scheme is used to partition the network, and each partition is mapped onto a processor in the NOW. We derive the processing time and memory space required to implement the parallel BP algorithm on NOWs. Performance measures such as speedup and space reduction factor are evaluated for the HP scheme and compared with earlier work based on a vertical partitioning (VP) scheme for mapping the MLP onto NOWs. The performance of the HP scheme is evaluated by solving an optical character recognition (OCR) problem on a network of ALPHA machines. The analytical and experimental results show that the proposed parallel algorithm achieves better speedup, lower communication time, and a better space reduction factor than the earlier algorithm. This paper also presents a simple and efficient static mapping scheme for heterogeneous systems: using divisible load scheduling theory, a closed-form expression for the number of neurons assigned to each processor in the NOW is obtained. Analytical and experimental results for the static mapping problem on NOWs are also presented.

Index Terms: Multilayer perceptron, backpropagation, network of workstations, optical character recognition, performance measures, divisible load theory.
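As a concrete illustration of the static mapping idea described in the abstract, the Python sketch below (not the authors' implementation; the function names, relative speeds, and layer sizes are hypothetical) assigns the neurons of one MLP layer to heterogeneous processors in proportion to their speeds, a simple stand-in for the paper's closed-form divisible-load allocation, and checks that a row-partitioned forward pass reproduces the unpartitioned layer.

    import numpy as np

    def allocate_neurons(total_neurons, speeds):
        # Closed-form, speed-proportional split in the spirit of divisible
        # load theory: processor i gets n_i = N * s_i / sum(s), rounded so
        # the integer counts still sum to N.
        speeds = np.asarray(speeds, dtype=float)
        shares = total_neurons * speeds / speeds.sum()
        counts = np.floor(shares).astype(int)
        leftover = total_neurons - counts.sum()
        # Hand the leftover neurons to the largest fractional remainders.
        order = np.argsort(shares - counts)[::-1]
        counts[order[:leftover]] += 1
        return counts

    def partitioned_forward(x, W, counts):
        # One MLP layer with the weight matrix split by rows, one block of
        # output neurons per (simulated) processor; the concatenation plays
        # the role of the all-gather of local activations.
        outputs, row = [], 0
        for n in counts:
            W_p = W[row:row + n, :]          # partition held by one processor
            outputs.append(np.tanh(W_p @ x)) # local activations
            row += n
        return np.concatenate(outputs)

    rng = np.random.default_rng(0)
    speeds = [1.0, 1.5, 0.8, 1.2]            # hypothetical relative speeds
    counts = allocate_neurons(64, speeds)
    print("neurons per processor:", counts)
    W = rng.standard_normal((64, 32))
    x = rng.standard_normal(32)
    assert np.allclose(partitioned_forward(x, W, counts), np.tanh(W @ x))

The paper derives its allocation from per-processor computation and communication times rather than raw speeds, so this proportional rule should be read only as a first-order approximation of that result.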
References

[1] Y.L. Murphey and Y. Luo, "Feature Extraction for a Multiple Pattern Classification Neural Network System," Proc. Int'l Conf. Pattern Recognition, vol. 2, pp. 220-223, 2002.
[2] M. Nikoonahad and D.C. Liu, "Medical Ultrasound Imaging Using Neural Networks," Electronics Letters, vol. 2, no. 6, pp. 18-23, 1990.
[3] D.E. Rumelhart and J.L. McClelland, eds., Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Cambridge, Mass.: MIT Press, 1986.
[4] T.J. Sejnowski and C.R. Rosenberg, "Parallel Networks that Learn to Pronounce English Text," Complex Systems, vol. 1, pp. 145-168, 1987.
[5] K.S. Narendra and K. Parthasarathy, "Identification and Control of Dynamical Systems Using Neural Networks," IEEE Trans. Neural Networks, vol. 1, no. 1, pp. 4-27, 1990.
[6] H. Yoon, J.H. Nang, and S.R. Maeng, "Parallel Simulation of Multilayered Neural Networks on Distributed-Memory Multiprocessors," Microprocessing and Microprogramming, vol. 29, pp. 185-195, 1990.
[7] E. Deprit, "Implementing Recurrent Back-Propagation on the Connection Machine," Neural Networks, vol. 2, pp. 295-314, 1989.
[8] D.A. Pomerleau et al., "Neural Network Simulation at Warp Speed: How We Got 17 Million Connections Per Second," Proc. IEEE Second Int'l Conf. Neural Networks, vol. 3, pp. 119-137, 1989.
[9] J. Hicklin and H. Demuth, "Modeling Neural Networks on the MPP," Proc. Second Symp. Frontiers of Massively Parallel Computation, pp. 39-42, 1988.
[10] J.A. Feldman et al., "Computing with Structured Connectionist Networks," Comm. ACM, vol. 31, no. 2, pp. 170-187, 1988.
[11] B.K. Mak and U. Egecioglu, "Communication Parameter Test and Parallel Backpropagation on iPSC/2 Hypercube Multiprocessor," IEEE Frontier, pp. 1353-1364, 1990.
[12] K. Joe, Y. Mori, and S. Miyake, "Simulation of a Large-Scale Neural Network on a Parallel Computer," Proc. 1989 Conf. Hypercubes, Concurrent Computation Application, pp. 1111-1118, 1989.
[13] D. Naylor and S. Jones, "A Performance Model for Multilayer Neural Networks in Linear Arrays," IEEE Trans. Parallel and Distributed Systems, vol. 5, no. 12, pp. 1322-1328, Dec. 1994.
[14] A. El-Amawy and P. Kulasinghe, "Algorithmic Mapping of Feedforward Neural Networks onto Multiple Bus Systems," IEEE Trans. Parallel and Distributed Systems, vol. 8, no. 2, pp. 130-136, Feb. 1997.
[15] T.M. Madhyastha and D.A. Reed, "Learning to Classify Parallel Input/Output Access Patterns," IEEE Trans. Parallel and Distributed Systems, vol. 13, no. 8, pp. 802-813, Aug. 2002.
[16] N. Sundararajan and P. Saratchandran, eds., Parallel Architectures for Artificial Neural Networks. IEEE CS Press, 1998.
[17] T.P. Hong and J.J. Lee, "A Nearly Optimal Back-Propagation Learning Algorithm on a Bus-Based Architecture," Parallel Processing Letters, vol. 8, no. 3, pp. 297-306, 1998.
[18] S. Mahapatra, "Mapping of Neural Network Models onto Systolic Arrays," J. Parallel and Distributed Computing, vol. 60, no. 6, pp. 667-689, 2000.
[19] V. Kumar, S. Shekhar, and M.B. Amin, "A Scalable Parallel Formulation of the Back-Propagation Algorithm for Hypercubes and Related Architectures," IEEE Trans. Parallel and Distributed Systems, vol. 5, no. 10, pp. 1073-1090, Oct. 1994.
[20] S.Y. Kung and J.N. Hwang, "A Unified Systolic Architecture for Artificial Neural Networks," J. Parallel and Distributed Computing, vol. 6, pp. 357-387, 1989.
[21] W.M. Lin, V.K. Prasanna, and K.W. Przytula, "Algorithmic Mapping of Neural Network Models onto Parallel SIMD Machines," IEEE Trans. Computers, vol. 40, no. 12, pp. 1390-1401, Dec. 1991.
[22] J. Ghosh and K. Hwang, "Mapping Neural Networks onto Message-Passing Multicomputers," J. Parallel and Distributed Computing, Apr. 1989.
[23] Y. Fujimoto, N. Fukuda, and T. Akabane, "Massively Parallel Architecture for Large Scale Neural Network Simulation," IEEE Trans. Neural Networks, vol. 3, no. 6, pp. 876-887, 1992.
[24] V.S. Sunderam, "PVM: A Framework for Parallel Distributed Computing," Concurrency: Practice and Experience, vol. 2, no. 4, pp. 315-339, 1990.
[25] S.Y. Kung, Digital Neural Networks. Englewood Cliffs, N.J.: Prentice Hall, 1993.
[26] V. Sudhakar and C. Siva Ram Murthy, "Efficient Mapping of Backpropagation Algorithm onto a Network of Workstations," IEEE Trans. Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 28, no. 6, pp. 841-848, 1998.
[27] D.S. Newhall and J.C. Horvath, "Analysis of Text Using a Neural Network: A Hypercube Implementation," Proc. Conf. Hypercubes, Concurrent Computers, Applications, pp. 1119-1122, 1989.
[28] L.C. Chu and B.W. Wah, "Optimal Mapping of Neural Network Learning on Message-Passing Multicomputers," J. Parallel and Distributed Computing, vol. 14, pp. 319-339, 1992.
[29] T. Leighton, Introduction to Parallel Algorithms and Architectures. Morgan Kaufmann Publishers, 1992.
[30] X. Zhang and M. McKenna, "The Back-Propagation Algorithm on Grid and Hypercube Architecture," Technical Report RL909, Thinking Machines Corp., 1990.
[31] S.K. Foo, P. Saratchandran, and N. Sundararajan, "Application of Genetic Algorithm for Parallel Implementation of Backpropagation Neural Networks," Proc. Int'l Symp. Intelligent Robotic Systems, pp. 76-79, 1995.
[32] S. Haykin, Neural Networks: A Comprehensive Foundation. Prentice Hall Int'l, 1999.
[33] V. Bharadwaj, D. Ghose, V. Mani, and T.G. Robertazzi, Scheduling Divisible Loads in Parallel and Distributed Systems, IEEE CS Press, 1996.
[34] http://www.ee.sunysb.edu/~tom/dlt.html#THEORY, 2004.
[35] R. Pasquini and V. Rego, "Optimistic Parallel Simulation over a Network of Workstations," Proc. Winter Simulation Conf., vol. 2, pp. 58, 1999.