Issue No. 6, Nov.-Dec. 2013 (vol. 15)
ISSN: 1521-9615
pp: 16-26
Peter Kogge , Univ. of Notre Dame, Notre Dame, IN, USA
John Shalf , Lawrence Berkeley Nat. Lab., Berkeley, CA, USA
ABSTRACT
We now have 20 years of data on the performance of supercomputers against at least one floating-point benchmark from dense linear algebra. Until about 2004, a single model of parallel programming, bulk synchronous message passing using MPI, was sufficient to permit translation into reasonable parallel programs for more complex applications. Starting in 2004, however, a confluence of events changed forever the architectural landscape that underpinned MPI. The first half of this article examines the underlying reasons for these changes and what they mean for system architectures. The second half then addresses the view going forward in terms of our standard scaling models and their profound implications for future programming and algorithm design.
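To make the bulk-synchronous MPI model the abstract refers to concrete, the following is a minimal sketch (not code from the paper): each rank alternates a local compute phase with a neighbor halo exchange, with a synchronization marking each step boundary. The problem size N, the step count, and the 3-point averaging stencil are illustrative assumptions.

/* Minimal bulk-synchronous MPI sketch: compute, exchange halos,
 * synchronize, repeat. N, STEPS, and the stencil are assumptions.
 * Compile with: mpicc bsp_sketch.c -o bsp_sketch */
#include <mpi.h>
#include <stdio.h>

#define N 1024      /* local problem size per rank (assumed) */
#define STEPS 100   /* number of bulk-synchronous steps (assumed) */

int main(int argc, char **argv) {
    int rank, size;
    double local[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int i = 0; i < N; i++)
        local[i] = (double)rank;          /* arbitrary initial data */

    int left  = (rank - 1 + size) % size; /* periodic neighbors */
    int right = (rank + 1) % size;

    for (int step = 0; step < STEPS; step++) {
        double from_left, from_right;

        /* Communication phase: exchange boundary values with neighbors. */
        MPI_Sendrecv(&local[N - 1], 1, MPI_DOUBLE, right, 0,
                     &from_left,    1, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&local[0],   1, MPI_DOUBLE, left,  1,
                     &from_right, 1, MPI_DOUBLE, right, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Local compute phase: a simple 3-point average stands in
         * for an application's real per-step work. */
        double prev = from_left;
        for (int i = 0; i < N; i++) {
            double cur  = local[i];
            double next = (i + 1 < N) ? local[i + 1] : from_right;
            local[i] = (prev + cur + next) / 3.0;
            prev = cur;
        }

        /* Barrier marks the bulk-synchronous step boundary. */
        MPI_Barrier(MPI_COMM_WORLD);
    }

    if (rank == 0)
        printf("done after %d bulk-synchronous steps\n", STEPS);
    MPI_Finalize();
    return 0;
}

In practice the pairwise MPI_Sendrecv calls already synchronize neighboring ranks; the explicit barrier simply makes the superstep boundary of the bulk-synchronous model visible.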
INDEX TERMS
Computer architecture, Market research, Transistors, Programming, Computational modeling, Memory management, Systems engineering and theory, programming models, scientific computing, exascale, HPC
CITATION
Peter Kogge and John Shalf, "Exascale Computing Trends: Adjusting to the 'New Normal' for Computer Architecture," Computing in Science & Engineering, vol. 15, no. 6, pp. 16-26, Nov.-Dec. 2013, doi: 10.1109/MCSE.2013.95.