Compiling Communication-Efficient Programs for Massively Parallel Machines
July 1991 (vol. 2, no. 3)
pp. 361-376

A method for generating parallel target code with explicit communication for massively parallel distributed-memory machines is presented. The source programs are shared-memory parallel programs with explicit control structures. The method extracts syntactic reference patterns from a program with a shared address space, selects appropriate communication routines, places these routines at appropriate locations in the target program text, and sets up the correct conditions for invoking them. An explicit communication metric guides the selection of data layout strategies.
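The abstract describes a pipeline: extract syntactic reference patterns, map each pattern to a communication routine, and compare candidate data layouts with an explicit communication cost. The Python sketch below is only a minimal illustration of that idea; the pattern classes (local, shift, general gather), the block and cyclic layouts, and the cost constants are assumptions made for the example and are not the compiler described in the paper.

from dataclasses import dataclass

@dataclass
class Ref:
    """A right-hand-side reference B[i + offset] in a data-parallel loop over i."""
    array: str
    offset: int
    irregular: bool = False  # True when the subscript cannot be analyzed syntactically

def classify(ref, layout):
    """Map a syntactic reference pattern plus a data layout to a communication routine."""
    if ref.irregular:
        return "general_gather"      # unanalyzable subscript: fall back to a general gather
    if ref.offset == 0:
        return "local"               # aligned reference, no communication needed
    if layout == "block":
        return "shift"               # constant offset crosses at most one block boundary
    return "general_gather"          # e.g. cyclic layout: every offset element is remote

# Illustrative per-element costs for each routine (assumed, not measured).
COST = {"local": 0.0, "shift": 1.0, "general_gather": 5.0}

def layout_cost(refs, layout):
    """Explicit communication metric: total cost of the routines a layout forces."""
    return sum(COST[classify(r, layout)] for r in refs)

if __name__ == "__main__":
    # Loop body resembling A[i] = B[i-1] + B[i] + B[i+1] (a three-point stencil).
    refs = [Ref("B", -1), Ref("B", 0), Ref("B", +1)]
    for layout in ("block", "cyclic"):
        routines = [classify(r, layout) for r in refs]
        print(f"{layout:6s} -> {routines}, cost = {layout_cost(refs, layout)}")

On this toy stencil the block layout needs only nearest-neighbour shifts while the cyclic layout forces general gathers, which is the kind of comparison a communication metric is meant to drive when choosing a data layout.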

[1] M. Fox et al., Solving Problems on Concurrent Processors, vol. 1. Englewood Cliffs, NJ: Prentice-Hall, 1988.
[2] C.-T. Ho, "Optimal communication primitives and graph embeddings on hypercubes," Ph.D. dissertation, Yale Univ., May 1990.
[3] S. L. Johnsson, "Communication efficient basic linear algebra computations on hypercube architectures," J. Parallel Distributed Comput., pp. 133-172, 1987.
[4] L. G. Valiant, "A scheme for fast parallel communication," SIAM J. Comput., vol. 11, pp. 350-361, May 1982.
[5] D. Callahan and K. Kennedy, "Compiling programs for distributed-memory multiprocessors," J. Supercomput., vol. 2, no. 2, pp. 151-170, 1988.
[6] H. P. Zima, H. J. Bast, and M. Gerndt, "SUPERB: A tool for semi-automatic SIMD/MIMD parallelization," Parallel Comput., vol. 6, pp. 1-18, 1988.
[7] C. Koelbel and P. Mehrotra, "Compiler transformations for nonshared memory machines," in Proc. 4th Int. Conf. Supercomput., May 1989.
[8] C. Koelbel, P. Mehrotra, and J. Van Rosendale, "Supporting shared data structures on distributed memory architectures," in Proc. 2nd ACM SIGPLAN Symp. Principles and Practice of Parallel Programming, Mar. 1990; also Rep. 90-7, ICASE, Jan. 1990.
[9] M. Rosing and R. B. Schnabel, "An overview of DINO--A new language for numerical computation on distributed memory multiprocessors," Tech. Rep. CU-CS-385-88, Univ. of Colorado, Mar. 1988.
[10] A. Rogers and K. Pingali, "Process decomposition through locality of reference," in Proc. SIGPLAN '89 Conf. Programming Language Design and Implementation, 1989, pp. 69-80.
[11] M. J. Quinn, P. J. Hatcher, and K. C. Jourdenais, "Compiling C* programs for a hypercube multicomputer," in Proc. ACM SIGPLAN Parallel Programming: Experience with Appl., Languages, Syst., 1988, pp. 57-65.
[12] A. Ramanujan and P. Sadayappan, "A methodology for parallelizing programs for complex memory multiprocessors," in Proc. Supercomputing '89, Reno, NV, Nov. 1989.
[13] M. Wolfe, Optimizing Supercompilers for Supercomputers. Cambridge, MA: MIT Press, 1989.
[14] J. Li and M. Chen, "Index domain alignment: Minimizing cost of cross-reference between distributed arrays," in Proc. 3rd Symp. Frontiers of Massively Parallel Computation, College Park, MD, Oct. 1990.
[15] D. A. Padua and M. J. Wolfe, "Advanced compiler optimizations for supercomputers," Commun. ACM, vol. 29, no. 12, pp. 1184-1201, Dec. 1986.
[16] J. Li, Compiling Crystal for Distributed-Memory Machines, Ph.D. dissertation, Yale Univ., New Haven, CT, Oct. 1991.
[17] M. Jacquemin and J. A. Yang, "Crystal version 3.0 reference manual," Tech. Rep. 840, Yale Univ., Nov. 1990.
[18] M. Chen, Y.-i. Choo, and J. Li, "Theory and pragmatics of compiling efficient parallel code," in Parallel Functional Programming, B. Szymanski, Ed. (Supercomputing and Parallel Processing, K. Hwang, series Ed.). New York: McGraw-Hill, 1991, ch. 7.
[19] M. Hemy, "Crystal compiler primer, version 3.0," Tech. Rep. 849, Yale Univ., Mar. 1991.
[20] M. Chen, Y.-i. Choo, and J. Li, "Compiling parallel programs by optimizing performance," J. Supercomput., vol. 1, pp. 171-207, July 1988.

Index Terms:
communication-efficient programs; parallel target code; explicit communication; massively parallel distributed-memory machines; source programs; shared-memory parallel programs; explicit control structures; syntactic reference patterns; shared address space; communication routines; target program text; communication metric; data layout strategies; parallel machines; parallel programming; program compilers; scheduling; storage management
Citation:
J. Li, M. Chen, "Compiling Communication-Efficient Programs for Massively Parallel Machines," IEEE Transactions on Parallel and Distributed Systems, vol. 2, no. 3, pp. 361-376, July 1991, doi:10.1109/71.86111