Exascale Computing Trends: Adjusting to the "New Normal" for Computer Architecture
Nov.-Dec. 2013 (vol. 15 no. 6)
pp. 16-26
Peter Kogge, Univ. of Notre Dame, Notre Dame, IN, USA
John Shalf, Lawrence Berkeley Nat. Lab., Berkeley, CA, USA
We now have 20 years of data on the performance of supercomputers against at least one floating-point benchmark from dense linear algebra. Until about 2004, a single parallel programming model, bulk synchronous programming with MPI, was sufficient to translate more complex applications into reasonable parallel programs. Starting in 2004, however, a confluence of events changed forever the architectural landscape that underpinned MPI. The first half of this article examines the underlying reasons for these changes and what they mean for system architectures. The second half then addresses the view going forward in terms of our standard scaling models and their profound implications for future programming and algorithm design.
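
The bulk-synchronous MPI model the abstract refers to alternates local computation, a communication phase, and a global synchronization ("superstep"). The sketch below is not from the article; it is a minimal C illustration of that structure, with illustrative array sizes and a simple ring-style neighbor exchange chosen only for the example.

    /* Minimal sketch of a bulk-synchronous MPI superstep:
       compute locally, exchange boundary data, synchronize, repeat.
       Sizes and the ring neighbor choice are illustrative assumptions. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        double local[1000], halo;              /* local partition of the data */
        for (int i = 0; i < 1000; i++) local[i] = rank;

        for (int step = 0; step < 10; step++) {
            /* 1. Local computation phase. */
            for (int i = 0; i < 1000; i++) local[i] += 1.0;

            /* 2. Communication phase: exchange one boundary value with neighbors. */
            int next = (rank + 1) % nprocs;
            int prev = (rank + nprocs - 1) % nprocs;
            MPI_Sendrecv(&local[999], 1, MPI_DOUBLE, next, 0,
                         &halo,       1, MPI_DOUBLE, prev, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            /* 3. Bulk synchronization before the next superstep. */
            MPI_Barrier(MPI_COMM_WORLD);
        }

        if (rank == 0) printf("done after 10 supersteps\n");
        MPI_Finalize();
        return 0;
    }

The pattern's appeal, as the article notes, is that a single such template could be mapped onto essentially any distributed-memory machine built before roughly 2004.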
Index Terms:
Computer architecture, Market research, Transistors, Programming, Computational modeling, Memory management, Systems engineering and theory, Programming models, Scientific computing, Exascale, HPC
Citation:
Peter Kogge, John Shalf, "Exascale Computing Trends: Adjusting to the 'New Normal' for Computer Architecture," Computing in Science and Engineering, vol. 15, no. 6, pp. 16-26, Nov.-Dec. 2013, doi:10.1109/MCSE.2013.95