Computing in Science & Engineering, vol. 15, no. 6, Nov.-Dec. 2013, pp. 16-26
Peter Kogge , Univ. of Notre Dame, Notre Dame, IN, USA
John Shalf , Lawrence Berkeley Nat. Lab., Berkeley, CA, USA
ABSTRACT
We now have 20 years of data on the performance of supercomputers against at least one floating-point benchmark from dense linear algebra. Until about 2004, a single parallel programming model, bulk-synchronous execution using MPI, was sufficient to permit translation into reasonable parallel programs for more complex applications. Starting in 2004, however, a confluence of events permanently changed the architectural landscape that had underpinned MPI. The first half of this article examines the underlying reasons for these changes and what they mean for system architectures. The second half then looks forward, addressing our standard scaling models and their profound implications for future programming and algorithm design.
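For readers unfamiliar with the bulk-synchronous pattern the abstract refers to, a minimal sketch may help: each "rank" computes locally, then all ranks meet at a barrier before the next superstep. This sketch uses Python threads and a `threading.Barrier` in place of real MPI ranks; all names are illustrative, and a real MPI code would use `MPI_Barrier` or a collective such as `MPI_Allreduce` at the synchronization point.

```python
import threading

NUM_RANKS = 4
barrier = threading.Barrier(NUM_RANKS)
partial = [0] * NUM_RANKS   # one slot of local results per rank
totals = [0] * NUM_RANKS    # what each rank sees after the exchange

def rank(r):
    # Superstep 1: purely local computation on this rank's data slice.
    partial[r] = sum(range(r * 10, (r + 1) * 10))
    # Bulk synchronization: no rank proceeds until all have finished.
    barrier.wait()
    # Superstep 2: every rank now sees all partial results
    # (the analogue of an all-reduce in MPI).
    totals[r] = sum(partial)

threads = [threading.Thread(target=rank, args=(r,)) for r in range(NUM_RANKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(totals)  # every rank computed the same global sum: 780
```

The simplicity of this model, one global synchronization point separating independent phases of local work, is what made MPI programs tractable on the pre-2004 architectures the article describes.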
INDEX TERMS
Computer architecture, market research, transistors, programming, computational modeling, memory management, systems engineering and theory, programming models, scientific computing, exascale, HPC
CITATION
Peter Kogge and John Shalf, "Exascale Computing Trends: Adjusting to the 'New Normal' for Computer Architecture," Computing in Science & Engineering, vol. 15, no. 6, pp. 16-26, Nov.-Dec. 2013, doi:10.1109/MCSE.2013.95