Issue No. 6, Nov.-Dec. 2013 (vol. 15), pp. 27-35
William Gropp , Univ. of Illinois at Urbana-Champaign, Urbana, IL, USA
Marc Snir , Argonne Nat. Lab., Argonne, IL, USA
Exascale systems will present programmers with many challenges. The authors review the parallel programming models that are appropriate for such systems and the challenges that implementations of those models face in an exascale system. They also discuss the feasibility of using existing programming systems, thus preserving the investment in legacy applications, as well as the benefits and likelihood of new programming models and systems.
Programming, Message systems, Computational modeling, Synchronization, Electronics packaging, Object oriented modeling, Data models, Computer architecture, simulation languages, scientific computing, concurrent, distributed and parallel languages
William Gropp, Marc Snir, "Programming for Exascale Computers", Computing in Science & Engineering, vol.15, no. 6, pp. 27-35, Nov.-Dec. 2013, doi:10.1109/MCSE.2013.96
1. M.C. Rinard, D.J. Scales, and M.S. Lam, “Jade: A High-Level, Machine-Independent Language for Parallel Programming,” Computer, vol. 26, no. 6, 1993, pp. 28-38.
2. MPI: A Message-Passing Interface Standard Version 3.0, Message Passing Interface Forum, 2012.
3. J. Reinders, Intel Threading Building Blocks: Outfitting C++ for Multicore Processor Parallelism, O'Reilly Media, 2007.
4. OpenMP Application Program Interface Version 4.0, OpenMP Architecture Rev. Board, 2013.
5. L. Kale and S. Krishnan, “CHARM++: A Portable Concurrent Object Oriented System Based on C++,” Proc. ACM SIGPLAN Int'l Conf. Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA'93), A. Paepcke, ed., ACM, 1993, pp. 91-108.
6. A. Alexandrov et al., “LogGP: Incorporating Long Messages into the LogP Model—One Step Closer Towards a Realistic Model for Parallel Computation,” Proc. 7th Ann. ACM Symp. Parallel Algorithms and Architectures, ACM, 1995, pp. 95-105.
7. L.G. Valiant, “A Bridging Model for Parallel Computation,” Comm. ACM, vol. 33, no. 8, 1990, pp. 103-111.
8. E.D. Brooks III, B.C. Gorda, and K.H. Warren, “The Parallel C Preprocessor,” Scientific Programming, vol. 1, no. 1, 1992, pp. 79-89.
9. C.E. Leiserson, “The Cilk++ Concurrency Platform,” J. Supercomputing, vol. 51, no. 3, 2010, pp. 244-257.
10. D. Buttlar, J. Farrell, and B. Nichols, PThreads Programming: A POSIX Standard for Better Multiprocessing, O'Reilly Media, 1996.
11. J. Hill et al., “BSPlib: The BSP Programming Library,” Parallel Computing, vol. 24, no. 14, 1998, pp. 1947-1980.
12. V. Saraswat et al., X10 Language Specification, version 2.3, IBM, 2013.
13. K. Feind, “Shared Memory Access (SHMEM) Routines,” Proc. Cray User Group Spring Meeting, Cray User Group, 1995, pp. 203-208.
14. W.W. Carlson et al., Introduction to UPC and Language Specification, Center for Computing Sciences, Inst. for Defense Analyses, 1999.
15. J. Reid, “The New Features of Fortran 2008,” ACM SIGPLAN Fortran Forum, vol. 27, no. 2, 2008, pp. 8-21.
16. J. Nieplocha, R.J. Harrison, and R.J. Littlefield, “Global Arrays: A Nonuniform Memory Access Programming Model for High-Performance Computers,” J. Supercomputing, vol. 10, no. 2, 1996, pp. 169-189.
17. Chapel Language Specification, Version 0.92, Cray, 2012.
18. S. Balay et al., “PETSc Users Manual, Revision 3.3,” Argonne Nat'l Laboratory, 2012.
19. Z. DeVito et al., “Liszt: A Domain Specific Language for Building Portable Mesh-Based PDE Solvers,” Proc. 2011 Int'l Conf. High Performance Computing, Networking, Storage and Analysis (SC11), ACM, 2011, article no. 9.
20. A.A. Auer et al., “Automatic Code Generation for Many-Body Electronic Structure Methods: The Tensor Contraction Engine,” Molecular Physics, vol. 104, no. 2, 2006, pp. 211-228.
21. P. Balaji et al., “MPI on Millions of Cores,” Parallel Processing Letters, vol. 21, no. 1, 2011, pp. 45-60.
22. D. Goodell et al., “Scalable Memory Use in MPI: A Case Study with MPICH2,” Recent Advances in the Message Passing Interface: Proc. 18th European MPI Users' Group Meeting (EuroMPI 2011), LNCS 6960, Y. Cotronis et al., eds., Springer, 2011, pp. 140-149.
23. P. Balaji et al., “PMI: A Scalable Parallel Process-Management Interface for Extreme-Scale Systems,” Recent Advances in the Message Passing Interface: Proc. 17th European MPI Users' Group Meeting, LNCS 6305, R. Keller et al., eds., Springer, 2010, pp. 31-41.
24. C. Terboven et al., “First Experiences with Intel Cluster OpenMP,” OpenMP in a New Era of Parallelism, Springer, 2008, pp. 48-59.
25. H. Pan, B. Hindman, and K. Asanovic, “Composing Parallel Software Efficiently with Lithe,” Proc. 2010 ACM SIGPLAN Conf. Programming Language Design and Implementation, ACM, 2010, pp. 376-387.
26. J. Zhang, B. Behzad, and M. Snir, “Optimizing the Barnes-Hut Algorithm in UPC,” Proc. 2011 ACM/IEEE Int'l Conf. High Performance Computing, Networking, Storage and Analysis (SC11), ACM, 2011, article no. 75.
27. J. Mellor-Crummey et al., “A New Vision for Coarray Fortran,” Proc. 3rd Conf. Partitioned Global Address Space Programming Models, ACM, 2009, p. 5.
28. T. von Eicken et al., “Active Messages: A Mechanism for Integrated Communication and Computation,” Proc. 19th Ann. Int'l Symp. Computer Architecture, ACM, 1992, pp. 256-266.
29. J. Zhang, B. Behzad, and M. Snir, “Design of a Multithreaded Barnes-Hut Algorithm for Multicore Clusters,” tech. report ANL/MCS-P4055-0313, MCS, Argonne Nat'l Laboratory, 2013.
30. “Dynamic Exascale Global Address Space or DEGAS,” 12 Feb. 2013.
31. G. Bikshandi et al., “Programming for Parallelism and Locality with Hierarchically Tiled Arrays,” Proc. 11th ACM SIGPLAN Symp. Principles and Practice of Parallel Programming, ACM, 2006, pp. 48-57.
32. A. Van Deursen, P. Klint, and J. Visser, “Domain-Specific Languages: An Annotated Bibliography,” ACM SIGPLAN Notices, vol. 35, no. 6, 2000, pp. 26-36.
33. T. Ruppelt and G. Wirtz, “Automatic Transformation of High-Level Object-Oriented Specifications into Parallel Programs,” Parallel Computing, vol. 10, no. 1, 1989, pp. 15-28.
34. E.N. Houstis et al., “PELLPACK: A Problem-Solving Environment for PDE-Based Applications on Multicomputer Platforms,” ACM Trans. Mathematical Software, vol. 24, no. 1, 1998, pp. 30-73.
35. S. Husa, I. Hinder, and C. Lechner, “Kranc: A Mathematica Package to Generate Numerical Codes for Tensorial Evolution Equations,” Computer Physics Comm., vol. 174, no. 12, 2006, pp. 983-1004.
36. D. Quinlan, “ROSE: Compiler Support for Object-Oriented Frameworks,” Parallel Processing Letters, vol. 10, nos. 2–3, 2000, pp. 215-226.
37. A. Hartono, B. Norris, and P. Sadayappan, “Annotation-Based Empirical Performance Tuning Using Orio,” IEEE Int'l Symp. Parallel & Distributed Processing, IEEE, 2009, pp. 1-11.
38. D.J. Quinlan et al., “Treating a User-Defined Parallel Library as a Domain-Specific Language,” Proc. 16th Int'l Parallel and Distributed Processing Symp., IEEE CS, 2002, pp. 105-114.
39. D. Batory, B. Lofaso, and Y. Smaragdakis, “JTS: Tools for Implementing Domain-Specific Languages,” Proc. 5th Int'l Conf. Software Reuse, IEEE, 1998, pp. 143-153.
40. J.J. Willcock, A. Lumsdaine, and D.J. Quinlan, “Reusable, Generic Program Analyses and Transformations,” ACM SIGPLAN Notices, vol. 45, 2009, pp. 5-14.
41. O. Sarood, E. Meneses, and L.V. Kale, “A ‘Cool’ Way of Improving the Reliability of HPC Machines,” Proc. Int'l Conf. High Performance Computing, Networking, Storage and Analysis, ACM, 2013, article no. 58.
42. M. Snir et al., “Addressing Failures in Exascale Computing,” tech. report ANL/MCS-TM-332, Argonne Nat'l Laboratory, Mathematics and Computer Science Division, Apr. 2013.