ZPL: A Machine Independent Programming Language for Parallel Computers
March 2000 (vol. 26 no. 3)
pp. 197-211

Abstract—The goal of producing architecture-independent parallel programs is complicated by the competing need for high performance. The ZPL programming language achieves both goals by building upon an abstract parallel machine and by providing programming constructs that allow the programmer to “see” this underlying machine. This paper describes ZPL and provides a comprehensive evaluation of the language with respect to its goals of performance, portability, and programming convenience. In particular, we describe ZPL's machine-independent performance model, describe the programming benefits of ZPL's region-based constructs, summarize the compilation benefits of the language's high-level semantics, and summarize empirical evidence that ZPL has achieved both high performance and portability on diverse machines such as the IBM SP-2, Cray T3E, and SGI Power Challenge.
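As an illustration of the region-based constructs mentioned in the abstract, the following is a minimal ZPL sketch in the style of the Jacobi iteration example used throughout the ZPL literature (see [46]). The problem size, tolerance, and identifier names are illustrative assumptions, and exact syntax details (comment form, keyword spellings) should be checked against the language guide rather than taken as authoritative.

    program jacobi;

    -- Illustrative configuration values; both could be overridden at load time.
    config var n       : integer = 512;
               epsilon : float   = 0.00001;

    region    R = [1..n, 1..n];              -- index set that parallel statements range over
    direction north = [-1, 0]; south = [ 1, 0];
              east  = [ 0, 1]; west  = [ 0,-1];

    var       A, Temp : [R] float;           -- arrays declared over region R
              err     : float;

    procedure jacobi();
    [R] begin
          A := 0.0;
          [north of R] A := 0.0;             -- boundary regions extend R by one row or column
          [east  of R] A := 0.0;
          [west  of R] A := 0.0;
          [south of R] A := 1.0;

          repeat
            -- Four-point stencil expressed with the @ (translation) operator.
            Temp := (A@north + A@east + A@west + A@south) / 4.0;
            err  := max<< abs(Temp - A);     -- global max-reduction
            A    := Temp;
          until err < epsilon;
        end;

Because the region R prefixes whole statements, the same source text describes the computation independently of how R is distributed across processors, which is the portability property the abstract emphasizes.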

[1] D. Abramson, I. Foster, J. Michalakes, and R. Socic, “Relative Debugging: A New Methodology for Debugging Scientific Applications,” Comm. ACM, vol. 39, no. 11, pp. 69–77, Nov. 1996.
[2] J.C. Adams, W.S. Brainerd, J.T. Martin, B.T. Smith, and J.L. Wagener, Fortran 90 Handbook, McGraw Hill, 1992.
[3] V.S. Adve et al., "An Integrated Compilation and Performance Analysis Environment for Data Parallel Programs," Proc. Supercomputing '95, ACM Press, New York, 1995.
[4] G. Alverson, W. Griswold, C. Lin, D. Notkin, and L. Snyder, “Abstractions for Portable, Scalable Parallel Programming,” IEEE Trans. Parallel and Distributed Systems, vol. 9, no. 1, pp. 1–17, Jan. 1998.
[5] C.R. Anderson, “An Implementation of the Fast Multipole Method without Multipoles,” SIAM J. Scientific and Statistical Computing, vol. 13, no. 4, pp. 923–947, July 1992.
[6] R.J. Anderson and L. Snyder, “A Comparison of Shared and Nonshared Memory Models of Parallel Computation,” Proc. IEEE, vol. 79, no. 4, pp. 480–487, 1991.
[7] D.H. Ballard and C.M. Brown, Computer Vision, Prentice Hall, Upper Saddle River, N.J., 1982.
[8] A. Beguelin and J. Dongarra, PVM: Parallel Virtual Machine: A Users' Guide and Tutorial for Networked Parallel Computing, MIT Press, 1994.
[9] G.E. Blelloch, Vector Models for Data-Parallel Computing, MIT Press, 1990.
[10] G.E. Blelloch, "NESL: A Nested Data-Parallel Language," Technical Report CMU-CS-92-103, School of Computer Science, Carnegie Mellon Univ., Jan. 1992.
[11] G.E. Blelloch, “Programming Parallel Algorithms,” Comm. ACM, vol. 39, no. 3, pp. 85-97, Mar. 1996.
[12] B. Chamberlain et al., "Factor-Join: A Unique Approach to Compiling Array Languages for Parallel Machines," Languages and Compilers for Parallel Computing, D. Sehr et al., eds., Springer-Verlag, Berlin, 1996, pp. 481-500.
[13] B. Chamberlain, S.-E. Choi, E.-C. Lewis, C. Lin, L. Snyder, and W.D. Weathersby, “The Case for High Level Parallel Programming in ZPL,” IEEE Computational Science and Eng., pp. 76–86, vol. 5, no. 3, July-Sep. 1998.
[14] B.L. Chamberlain, S.-E. Choi, E.-C. Lewis, C. Lin, L. Snyder, and W.D. Weathersby, “ZPL's WYSIWYG Performance Model,” Third Int'l Workshop High-Level Parallel Programming Models and Supportive Environments, pp. 50–61, Mar. 1998.
[15] B.L. Chamberlain, S. Choi, and L. Snyder, "A Compiler Abstraction for Machine-Independent Communication Generation," Workshop on Languages and Compilers for Parallel Computing, Springer-Verlag, Berlin, 1997.
[16] B.L. Chamberlain, E.-C. Lewis, C. Lin, and L. Snyder, “Regions: An Abstraction for Expressing Array Computation,” Proc. ACM SIGAPL/SIGPLAN Int'l Conf. Array Programming Languages, pp. 9–41, Aug. 1999.
[17] B.L. Chamberlain, E.-C. Lewis, and L. Snyder, “A Region-Based Approach to Sparse Parallel Computation,” Technical Report UW-CSE-98-11-01, Dept. of Computer Science and Eng., Univ. of Washington, Nov. 1998.
[18] B.L. Chamberlain, E.-C. Lewis, and L. Snyder, “Language Support for Pipelining Wavefront Computations,” Proc. Workshop Languages and Compilers for Parallel Computing, 1999.
[19] S.-E. Choi and L. Snyder, “Quantifying the Effect of Communication Optimizations,” Proc. Int'l Conf. Parallel Processing, pp. 218–222, Aug. 1997.
[20] M.D. Dikaiakos et al., "The Portable Parallel Implementation of Two Novel Mathematical Biology Algorithms in ZPL," Proc. Ninth Int'l Conf. Supercomputing, ACM Press, 1995, pp. 365-374.
[21] K. Ekanadham and Arvind, “SIMPLE: Part I, An Exercise in Future Scientific Programming,” Technical Report 273, MIT CSG, 1987.
[22] S. Fortune and J. Wyllie, "Parallelism in Random Access Machines," Proc. 10th Ann. Symp. Theory of Computing, pp. 114-118, 1978.
[23] Message Passing Interface Forum, “MPI: A Message Passing Interface Standard,” Int'l J. Supercomputer Applications, vol. 8, nos. 3–4, pp. 169–416, 1994.
[24] Message Passing Interface Forum, “MPI Standard 2.0,” technical report, http://www.mcs.anl.gov/mpi/, Oct. 1997.
[25] D. Hanselman and B. Littlefield, Mastering MATLAB, Prentice Hall, 1996.
[26] High Performance Fortran Forum, “High Performance Fortran Language Specification, Version 1.1,” Nov. 1994.
[27] E. Johnson, D. Gannon, and P. Beckman, “HPC++: Experiments with the Parallel Standard Template Library,” Proc. Int'l Conf. Supercomputing, 1997.
[28] S.R. Kohn and S.B. Baden, “A Robust Parallel Programming Model for Dynamic Non-Uniform Scientific Computations,” Technical Report CS94-354, Dept. of Computer Science and Eng., Univ. of Calif. at San Diego, Mar. 1994.
[29] R.E. Ladner and M.J. Fischer, "Parallel Prefix Computation," J. ACM, vol. 27, no. 4, pp. 831-838, Oct. 1980.
[30] R.J. LeVeque and D.S. Bale, “Wave Propagation Methods for Conservation Laws with Source Terms,” Proc. Seventh Int'l Conf. Hyperbolic Problems, Feb. 1998.
[31] E.-C. Lewis, C. Lin, and L. Snyder, “The Implementation and Evaluation of Fusion and Contraction in Array Languages,” Proc. SIGPLAN Conf. Programming Language Design and Implementation, pp. 50–59, June 1998.
[32] E.-C. Lewis, C. Lin, L. Snyder, and G. Turkiyyah, “A Portable Parallel N-Body Solver,” Proc. Seventh SIAM Conf. Parallel Processing for Scientific Computing, D. Bailey, P. Bjorstad, J. Gilbert, M. Mascagni, R. Schreiber, H. Simon, V. Torczon, and L. Watson, eds., pp. 331–336, 1995.
[33] C. Lin, L. Snyder, R. Anderson, B. Chamberlain, S. Choi, G. Forman, E. Lewis, and W.D. Weathersby, “ZPL vs. HPF: A Comparison of Performance and Programming Style,” Technical Report 95-11-05, Dept. of Computer Science and Eng., Univ. of Washington, 1995.
[34] C. Lin, "The Portability of Parallel Programs Across MIMD Computers," PhD thesis, Dept. of Computer Science and Eng., Univ. of Washington, 1992.
[35] C. Lin and L. Snyder, "ZPL: An Array Sublanguage," Languages and Compilers for Parallel Computing, U. Banerjee, D. Gelernter, A. Nicolau, and D. Padua, eds., pp. 96-114. Springer-Verlag, 1993.
[36] C. Lin and L. Snyder, "Simple Performance Results in ZPL," Languages and Compilers for Parallel Computing, K. Pingali et al., eds., Springer-Verlag, Berlin, 1994, pp. 361-375.
[37] B.J. MacLennan, Principles of Programming Languages: Design, Evaluation and Implementation, Holt, Rinehart and Winston, 1987.
[38] T.A. Ngo, The Role of Performance Models in Parallel Programming and Languages, PhD thesis, Univ. of Washington, Dept. of Computer Science and Engineering, Seattle, 1997.
[39] T. Ngo, L. Snyder, and B. Chamberlain, "Portable Performance of Data Parallel Languages," Supercomputing '97, IEEE Computer Society Press, Los Alamitos, Calif., 1997 (published only on CD-ROM, ISBN 0-89791-985-8).
[40] W. Richardson, M. Bailey, and W.H. Sanders, "Using ZPL to Develop a Parallel Chaos Router Simulator," 1996 Winter Simulation Conf., SCS Int'l., San Diego, 1996, pp. 809-816.
[41] G. Rivera and C.-W. Tseng, “Data Transformations for Eliminating Conflict Misses,” Proc. SIGPLAN Conf. Programming Language Design and Implementation, June 1998.
[42] G. Roth and K. Kennedy, “Dependence Analysis of Fortran 90 Array Syntax,” Proc. Int'l Conf. Parallel and Distributed Processing Techniques and Applications, pp. 1225–1235, Aug. 1996.
[43] L. Snyder, "Type Architecture, Shared Memory, and the Corollary of Modest Potential," Ann. Rev. Computer Science, Annual Reviews, Inc., Palo Alto, Calif., 1986, pp. 289-318.
[44] L. Snyder, “Foundations of Practical Parallel Programming Languages,” Proc. Second Int'l Conf. Austrian Center for Parallel Computation, pp. 115–134, 1993.
[45] L. Snyder, “Experimental Validation of Models of Parallel Computation,” Lecture Notes in Computer Science, special vol. 1000, A. Hofmann and J. van Leeuwen, eds., pp. 78–100, 1995.
[46] L. Snyder, A Programmer's Guide to ZPL. MIT Press, 1999.
[47] R. Sosic and D.A. Abramson, “Guard: A Relative Debugger,” Software Practice and Experience, vol. 27, no. 2, pp. 185–206, Feb. 1997.
[48] “C* Programming Guide, Version 6.0.2,” Thinking Machines Corp., Cambridge, Mass., June 1991.
[49] A. Wagner and C.E. Scott, “Lattice Boltzmann Simulations as a Tool to Examine Multiphase Flow Problems for Polymer Processing Applications,” Proc. Soc. of Plastics Engineers Ann. Technical Conf. (ANTEC '99), 1999.
[50] M. Wolf and M. Lam, “A Data Locality Optimizing Algorithm,” Proc. SIGPLAN Conf. Programming Language Design and Implementation, pp. 30-44, June 1991.
[51] M. Wolfe, High Performance Compilers for Parallel Computing, Addison-Wesley, 1996.
[52] R.W. Numrich and J.K. Reid, Co-Array Fortran for Parallel Programming, Technical Report RAL-TR-1998-060, Rutherford Appleton Laboratory, Oxon, UK, Aug. 1998.

Index Terms:
Portable, efficient, parallel programming language.
Citation:
Bradford L. Chamberlain, Sung-Eun Choi, E. Christopher Lewis, Calvin Lin, Lawrence Snyder, W. Derrick Weathersby, "ZPL: A Machine Independent Programming Language for Parallel Computers," IEEE Transactions on Software Engineering, vol. 26, no. 3, pp. 197-211, March 2000, doi:10.1109/32.842947