
Rafic A. Ayoubi and Magdy A. Bayoumi, "Efficient Mapping Algorithm of Multilayer Neural Network on Torus Architecture," IEEE Transactions on Parallel and Distributed Systems, vol. 14, no. 9, pp. 932-943, Sept. 2003, doi: 10.1109/TPDS.2003.1233715.
Abstract—This paper presents a new efficient parallel implementation of neural networks on mesh-connected SIMD machines. A new algorithm to implement the recall and training phases of the multilayer perceptron network with back-error propagation is devised. The developed algorithm is much faster than other known algorithms of its class and comparable in speed to more complex architectures such as the hypercube, without the added cost; it requires O(1) multiplications and O(log N) additions, whereas most others require O(N) multiplications and O(N) additions. The proposed algorithm maximizes parallelism by unfolding the ANN computation into its smallest computational primitives and processing these primitives in parallel.
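The complexity claim in the abstract can be illustrated with a minimal Python sketch (not the paper's torus implementation): if each of N processing elements holds one weight-activation pair, all N products are formed in a single parallel step (O(1) multiplications), and the partial sums are then combined by a pairwise tree reduction in ceil(log2 N) rounds (O(log N) additions). The function below simulates those rounds sequentially and counts them; the names `recall_step`, `weights`, and `activations` are illustrative, not from the paper.

```python
def recall_step(weights, activations):
    """Simulate one neuron's weighted sum as a SIMD-style tree reduction.

    Returns (weighted_sum, rounds), where `rounds` counts the parallel
    addition steps a mesh/torus of PEs would need: ceil(log2 N).
    """
    # Step 1: every PE forms its product simultaneously -> one multiply each.
    partial = [w * a for w, a in zip(weights, activations)]
    rounds = 0
    # Step 2: each round, neighboring PEs combine pairs of partial sums,
    # halving the number of live values until one total remains.
    while len(partial) > 1:
        nxt = [partial[i] + partial[i + 1]
               for i in range(0, len(partial) - 1, 2)]
        if len(partial) % 2:          # odd element carries over unchanged
            nxt.append(partial[-1])
        partial = nxt
        rounds += 1
    return partial[0], rounds

# For 8 inputs: 3 addition rounds (log2 8) instead of 7 sequential adds.
```

The contrast with the O(N) schemes cited in the abstract is the while-loop: a sequential inner product performs N-1 dependent additions, whereas the tree needs only log2 N rounds because every round's additions are independent and can execute concurrently.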
[1] R.A. Ayoubi and M.A. Bayoumi, "An Efficient Implementation of Multilayer Perceptron on Mesh Architecture," Proc. IEEE Int'l Symp. Circuits and Systems, vol. 2, pp. 109-112, 2002.
[2] G. Blelloch and C. Rosenberg, "Network Learning on the Connection Machine," Proc. 10th Int'l Joint Conf. Artificial Intelligence, 1987.
[3] C. Bishop, Neural Networks for Pattern Recognition. New York: Oxford Univ. Press, 1995.
[4] T. Blank, "The MasPar MP-1 Architecture," Proc. IEEE Compcon, pp. 20-24, San Francisco, Feb./Mar. 1990.
[5] D.A. Culler and J.P. Singh, Parallel Computer Architecture: A Hardware/Software Approach. San Francisco, Calif.: Morgan Kaufmann, 1998.
[6] J. Ghosh and K. Hwang, "Mapping Neural Networks onto Message-Passing Multicomputers," J. Parallel and Distributed Computing, vol. 6, pp. 291-330, 1989.
[7] H.M. Hastings and S. Waner, "Neural Nets on the MPP," Proc. Symp. Frontiers of Massively Parallel Scientific Computation, 1987.
[8] S. Haykin, Neural Networks: A Comprehensive Foundation. New York: Macmillan College, 1994.
[9] J. Hertz, A. Krogh, and R.G. Palmer, Introduction to the Theory of Neural Computation. Redwood City, Calif.: Addison-Wesley, 1991.
[10] J. Hwang and S. Kung, "Parallel Algorithms/Architectures for Neural Networks," J. VLSI Signal Processing, 1989.
[11] Y. Izui and A. Pentland, "Analysis of Neural Networks with Redundancy," Neural Computation, vol. 2, pp. 226-238, 1990.
[12] K. Kim and V.K.P. Kumar, "Efficient Implementation of Neural Networks on Hypercube SIMD Arrays," Proc. Int'l Joint Conf. Neural Networks, vol. 2, pp. 614-617, 1989.
[13] Y. Kim, M.J. Noh, T.D. Han, and S.D. Kim, "Mapping of Neural Networks onto the Memory-Processor Integrated Architecture," Neural Networks, vol. 11, pp. 1083-1098, 1998.
[14] S.Y. Kung, "Parallel Architectures for Artificial Neural Nets," Proc. Int'l Conf. Systolic Arrays, vol. 1, pp. 163-174, 1988.
[15] W. Lin, V.K. Prasanna, and K.W. Przytula, "Algorithmic Mapping of Neural Network Models onto Parallel SIMD Machines," IEEE Trans. Computers, vol. 40, no. 12, pp. 1390-1401, Dec. 1991.
[16] W. Lincoln and J. Skrzypek, "Synergy of Clustering Multiple Back Propagation Networks," Neural Information Processing Systems 2, D.S. Touretzky, ed., pp. 650-657, Morgan Kaufmann, 1990.
[17] M.J. Little and J. Grinberg, The 3-D Computer: An Integrated Stack of WSI Wafers. Norwell, Mass.: Kluwer Academic, 1988.
[18] Q.M. Malluhi, M.A. Bayoumi, and T.R.N. Rao, "An Efficient Mapping of Multilayer Perceptron with Backpropagation ANNs on Hypercubes," Proc. IEEE Symp. Parallel and Distributed Processing, pp. 368-375, 1993.
[19] Q.M. Malluhi, M.A. Bayoumi, and T.R.N. Rao, "Efficient Mapping of ANNs on Hypercube Massively Parallel Machines," IEEE Trans. Computers, vol. 44, no. 6, pp. 769-779, June 1995.
[20] MasPar MP-1 Hardware Manuals. Sunnyvale, Calif.: MasPar Computer Corp., 1992.
[21] D. Phatak and I. Koren, "Complete and Partial Fault Tolerance of Feedforward Neural Nets," IEEE Trans. Neural Networks, vol. 6, no. 2, pp. 446-456, 1995.
[22] D.A. Pomerleau et al., "Neural Network Simulations at Warp Speed: How We Got 17 Million Connections per Second," Proc. IEEE Int'l Conf. Neural Networks, pp. 143-150, July 1988.
[23] J.L. Potter, The Massively Parallel Processor. Cambridge, Mass.: MIT Press, 1985.
[24] U. Schwiegelshohn, "A Short-Periodic Two-Dimensional Systolic Sorting Algorithm," Proc. Int'l Conf. Systolic Arrays, vol. 1, pp. 257-264, 1988.
[25] S. Shams and J.L. Gaudiot, "Implementing Regularly Structured Neural Networks on the DREAM Machine," IEEE Trans. Neural Networks, vol. 6, no. 2, pp. 407-421, 1995.
[26] S. Shams and W. Przytula, "Mapping of Neural Networks onto Programmable Parallel Machines," Proc. IEEE Int'l Symp. Circuits and Systems, pp. 2613-2617, 1990.
[27] Connection Machine Model CM-2 Technical Summary, Technical Report HA87-4, Thinking Machines Corp., 1987.
[28] S. Tomboulian, "Introduction to a System for Implementing Neural Net Connections on SIMD Architectures," Proc. Neural Information Processing Systems, pp. 804-813, 1987.