
Lutz Prechelt, “Exploiting Domain-Specific Properties: Compiling Parallel Dynamic Neural Network Algorithms into Efficient Code,” IEEE Transactions on Parallel and Distributed Systems, vol. 10, no. 11, pp. 1105–1117, Nov. 1999, doi: 10.1109/71.809571.
Abstract—Domain-specific constraints can be exploited to implement compiler optimizations that are not otherwise feasible. Compilers for neural network learning algorithms can achieve near-optimal co-locality of data and processes …
Index Terms—Compiler optimizations, high-level parallel language, irregular problems, dynamic data structures, communication optimization.
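The abstract's key idea, keeping each unit's data on the processor that computes with it even when the network is irregular and changes during learning, can be pictured with a small owner-computes sketch. The following Python sketch is purely illustrative and is not taken from the paper or from the CuPit-2 compiler; the names (Node, partition, local_update) and the greedy fan-in balancing heuristic are assumptions made for this example only.

    # Illustrative only: "owner computes" co-locality for an irregular network.
    # Each node is assigned to one processor; the weights it owns live with it,
    # so a weight update needs no remote writes. A domain-specific compiler can
    # derive such a mapping statically because it knows the access patterns.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        nid: int
        in_weights: dict = field(default_factory=dict)  # source node id -> weight

    def partition(nodes, n_procs):
        """Greedy balance by fan-in: each node goes to the least-loaded processor."""
        buckets = [[] for _ in range(n_procs)]
        loads = [0] * n_procs
        for node in sorted(nodes, key=lambda n: -len(n.in_weights)):
            p = loads.index(min(loads))          # least-loaded processor owns it
            buckets[p].append(node)
            loads[p] += 1 + len(node.in_weights)
        return buckets

    def local_update(owned, activations, errors, lr=0.1):
        """Weight update touches only locally owned data (no remote writes)."""
        for node in owned:
            for src, w in node.in_weights.items():
                node.in_weights[src] = w + lr * errors[node.nid] * activations[src]

    # Example: three nodes with irregular fan-in, split across two processors.
    nodes = [Node(0), Node(1, {0: 0.5}), Node(2, {0: 0.1, 1: -0.3})]
    for owned in partition(nodes, n_procs=2):
        local_update(owned, activations={0: 1.0, 1: 0.4, 2: 0.7},
                     errors={0: 0.0, 1: 0.2, 2: -0.1})

When a node is created or pruned during learning, only the partition step reruns; the update step stays purely local, which is the co-locality property the abstract says domain-specific compilation can exploit.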
Bibliographic References
[1] Neurocomputing: Foundations of Research, J.A. Anderson and E. Rosenfeld, eds. Cambridge, Mass.: MIT Press, 1988.
[2] G.E. Blelloch, S. Chatterjee, J. Hardwick, J. Sipelstein, and M. Zagha, “Implementation of a Portable Nested Data-Parallel Language,” Proc. Fourth ACM SIGPLAN Symp. Principles and Practice of Parallel Programming, 1993.
[3] S. Chatterjee, J.R. Gilbert, R. Schreiber, and S.H. Teng, “Automatic Array Alignment in Data-Parallel Programs,” Proc. ACM SIGACT/SIGPLAN Symp. Principles of Programming Languages, Charleston, S.C., Jan. 1993.
[4] W. Finnoff, F. Hergert, and H.G. Zimmermann, “Improving Model Selection by Nonconvergent Methods,” Neural Networks, vol. 6, pp. 771–783, 1993.
[5] S.F. Hummel, E. Schonberg, and L.E. Flynn, “Factoring: A Method for Scheduling Parallel Loops,” Comm. ACM, vol. 35, no. 8, pp. 90–101, Aug. 1992.
[6] B. Gomes, “A Framework for Mapping Connectionist Networks onto Parallel Machines,” PhD thesis, Electrical Engineering and Computer Science Dept., Univ. of California, Berkeley, May 1997.
[7] K.A. Grajski, “Neurocomputing Using the MasPar MP-1,” Technical Report 90010, MasPar Computers, Sunnyvale, Calif., 1990.
[8] R.W. Gray, V.P. Heuring, S.P. Levi, A.M. Sloane, and W.M. Waite, “Eli: A Complete, Flexible Compiler Construction System,” Comm. ACM, vol. 35, no. 2, pp. 121–131, Feb. 1992.
[9] D. Hammerstrom, The CNAPS Architecture, Adaptive Solutions, Beaverton, Ore., Jan. 1993.
[10] B. Hendrickson and R. Leland, The Chaco User's Guide, Version 1.0, Technical Report SAND93-2339 (UC-405), Sandia National Laboratories, Albuquerque, N.M., Oct. 1993.
[11] H. Hopp and L. Prechelt, “CuPit-2: A Portable Parallel Programming Language for Artificial Neural Networks,” Proc. 15th IMACS World Congress Scientific Computation, Modelling, and Applied Math., A. Sydow, ed., vol. 6, pp. 493–498, Berlin: Wissenschaft und Technik Verlag, Aug. 1997.
[12] C. Jacob and P. Wilke, “A Distributed Network Simulation Environment for Multi-Processing Systems,” Proc. Int'l Joint Conf. Neural Networks (IJCNN), pp. 1,178–1,183, Singapore, 1991.
[13] G. Kock and T. Becher, “Mind: An Environment for the Development, Integration, and Acceleration of Connectionist Systems,” Proc. 15th IMACS World Congress Scientific Computation, Modelling, and Applied Math., pp. 499–504, 1997.
[14] G. Kock and N.B. Serbedzija, “Artificial Neural Networks: From Compact Descriptions to C++,” Proc. Int'l Conf. Artificial Neural Networks, 1994.
[15] D. Koll, M. Riedmiller, and H. Braun, “Massively Parallel Training of Multi Layer Perceptrons with Irregular Topologies,” Proc. Int'l Conf. Artificial Neural Networks and Genetic Algorithms (ICANNGA), Alès, France, Springer-Verlag, 1995.
[16] X. Liu and G.L. Wilcox, “Benchmarking of the CM-5 and the Cray Machines with a Very Large Backpropagation Neural Network,” Technical Report 93/38, Univ. of Minnesota Supercomputer Inst., Apr. 1993.
[17] MPL Language Reference Manual. Sunnyvale, Calif.: MasPar Computers, 1990.
[18] W. McCulloch and W. Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Bull. Math. Biophysics, vol. 5, pp. 115–133, 1943.
[19] M. Misra, “Parallel Environments for Implementing Neural Networks,” Neural Computing Surveys, vol. 1, pp. 48–60, 1997.
[20] S. Müller and B. Gomes, “A Performance Analysis of CNS-1 on Sparse Connectionist Networks,” Technical Report TR-94-009, Int'l Computer Science Inst., Berkeley, Calif., Feb. 1994.
[21] M. Philippsen, “Automatic Alignment of Array Data and Processes to Reduce Communication Time on DMPPs,” Proc. Fifth ACM SIGPLAN Symp. Principles and Practice of Parallel Programming (PPoPP), pp. 156–165, July 1995.
[22] M. Philippsen, E.A. Heinz, and P. Lukowicz, “Compiling Machine-Independent Parallel Programs,” ACM SIGPLAN Notices, vol. 28, no. 8, pp. 99–108, Aug. 1993.
[23] L. Prechelt, “CuPit—A Parallel Language for Neural Algorithms: Language Reference and Tutorial,” Technical Report 4/94, Fakultät für Informatik, Univ. Karlsruhe, Germany, Jan. 1994, ftp://ftp.ira.uka.de/pub/papers/techreports/1994/1994-04.ps.gz.
[24] L. Prechelt, “PROBEN1—A Set of Benchmarks and Benchmarking Rules for Neural Network Training Algorithms,” Technical Report 21/94, Fakultät für Informatik, Univ. Karlsruhe, Germany, Sept. 1994, ftp://ftp.ira.uka.de/pub/papers/techreports/1994/1994-21.ps.gz.
[25] L. Prechelt, “The CuPit-Compiler for the MasPar—A Literate Programming Document,” Technical Report 1/95, Fakultät für Informatik, Univ. Karlsruhe, Germany, Jan. 1995, ftp://ftp.ira.uka.de/pub/papers/techreports/1995/1995-01.ps.gz.
[26] L. Prechelt, “A Parallel Programming Model for Irregular Dynamic Neural Networks,” Proc. Programming Models for Massively Parallel Computers, W.K. Giloi, S. Jähnichen, and B.D. Shriver, eds., GMD First, IEEE CS Press, Berlin, Oct. 1995. (By accident, the article was not printed in the proceedings volume.)
[27] U. Ramacher, W. Raab, J. Anlauf, U. Hachmann, J. Beichter, N. Brüls, M. Weßeling, E. Sicheneder, J. Gläß, A. Wurz, and R. Männer, “Synapse-1: A High-Speed General Purpose Parallel Neurocomputer System,” Proc. Ninth Int'l Symp. Parallel Processing (IPPS '95), pp. 774–781, IEEE/CS Press, Los Alamitos, Calif., Apr. 1995.
[28] H. Braun and M. Riedmiller, “A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm,” Proc. IEEE Int'l Conf. Neural Networks (ICNN '93), IEEE, Piscataway, N.J., 1993, pp. 586–591.
[29] A. Sydow, ed., “Application in Modelling and Simulation,” Proc. 15th IMACS World Congress on Scientific Computation, Modelling, and Applied Math., vol. 6, Berlin: Wissenschaft und Technik Verlag, Aug. 1997.
[30] T. Tollenaere and G.A. Orban, “Decomposition and Mapping of Locally Connected Layered Neural Networks on Message-Passing Multiprocessors,” Parallel Algorithms and Applications, vol. 1, pp. 43–56, 1993.
[31] X. Zhang, M. McKenna, J.P. Mesirov, and D. Waltz, “An Efficient Implementation of the Backpropagation Algorithm on the Connection Machine CM-2,” Advances in Neural Information Processing Systems 2, D. Touretzky, ed., pp. 801–809. San Mateo, Calif.: Morgan Kaufmann, 1989.