Issue No. 06 - Nov.-Dec. (2013 vol. 15)
ISSN: 1521-9615
pp: 12-15
Steven Gottlieb , Indiana University
Thomas Sterling , Indiana University
The history of high-performance computing (HPC) spans almost seven decades and has seen a factor of 10 trillion increase in speed since the first-generation vacuum-tube-based von Neumann computers of the late 1940s. This extraordinary advance greatly exceeds that of any other human technology. And it's not that we initially got it wrong and then later finally got it right. Rather, each decade saw a performance gain of at least two orders of magnitude, steadily harnessing the accumulating advances of the basic enabling device technologies in logic, memory, and data communication. Despite this apparent consistency, the technologies driving performance, as well as the innovations in programming models and operational methods that have delivered it, have changed markedly and repeatedly to sustain this growth.
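As a quick back-of-the-envelope check (our own arithmetic, not a figure from the article), compounding two orders of magnitude per decade over the roughly six and a half decades since the late 1940s does reproduce a speedup on the order of 10 trillion:

```python
# Back-of-envelope check: ~100x per decade compounded over ~6.5 decades
# should reproduce the ~10-trillion-fold speedup cited above.
growth_per_decade = 100    # two orders of magnitude
decades = 6.5              # late 1940s to early 2010s

total_speedup = growth_per_decade ** decades
print(f"{total_speedup:.1e}")    # on the order of 1e13, i.e., about 10 trillion
```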
In the most recent epoch, after 20 years of improvements to the multiprocessor, distributed-memory message-passing strategy, significant changes are taking place, again driven by technological change. Teraflops were achieved in 1997 and petaflops in 2008. This last milestone was accomplished without significant disruption to programmers employing conventional methods, despite a dramatic change in 2004/2005, when the speed of the individual processor core flat-lined due to limitations in power consumption. However, it was clear, even then, that scaling current technologies to exaflops through incremental extensions of past practices would consume far too much power to be practical. This special issue of CiSE addresses the deep questions of the challenges currently facing sustained performance growth to exascale and beyond, the opportunities to do so, the new architecture designs that might make it possible, and the programming models and support software methods that will exploit them for future applications in science, technology, commerce, and defense.
What the Future Holds—and Still Needs
The advent of multicore sockets and GPU accelerators offers possible performance growth through raw semiconductor technology improvements, but it also imposes unprecedented challenges in efficiency, scaling, power, and reliability, as well as programmer productivity. Achieving exaflops speed will require new programming techniques, but what of the billions of dollars of investment in past software development and mainstream markets? How will the field of HPC continue to leverage the strength of COTS technologies and the economy of scale of mass-produced computing and memory components if exascale needs something different? Will the highest-end systems become increasingly limited in the classes of problems they can serve, or will new execution models, architectures, and programming techniques evolve to meet these challenges? This special issue of CiSE brings together expert views to illuminate the possible approaches.
Before we get more deeply into the challenges of exascale computing, we should talk briefly about the need. From November 2008 to October 2009, there was a series of eight Scientific Grand Challenges Workshops (sponsored by the US Department of Energy Office of Advanced Scientific Computing Research and coordinated by Paul Messina) that asked scientists to assess their need for exascale computing. The workshops covered climate science, high-energy physics, nuclear physics, fusion energy, nuclear energy, biology, materials science and chemistry, and national security. The workshop reports detail what could be done with exascale computers.
In December 2009, Rick Stevens and Andy White led a workshop on architectures and technology for extreme-scale computing that brought together scientists and computer scientists from industry, national laboratories, and universities to examine the challenges, some of the potential solutions, and the research that would need to be done to achieve exascale computing by 2018. A key concept is the codesign of the hardware, system software, and applications software to assure that they all work together. Three codesign teams have been funded to study materials in extreme environments, advanced reactors, and combustion in turbulence. There's also an ongoing international effort in software design.
If you don't expect to be computing at the exascale level, is there a reason for you to be interested in the current issue? We think so. Because the technology needed for exascale will require great improvements in energy efficiency and cost-effectiveness at the node level, it might also wind up on your desktop, and departmental systems at the petaflop/s level might become affordable. Although your level of concurrency might be smaller than what exascale requires, it will be much higher than what's required on today's desktops.
Contributions to This Special Issue
We kick off this issue with “Exascale Computing Trends: Adjusting to the ‘New Normal’ for Computer Architecture,” by Peter Kogge and John Shalf. Kogge chaired a 2008 DARPA-funded study on the technology challenges of building exascale systems. Kogge and Shalf detail why the single-processor speed increases we've seen in the past won't continue, and how the key to exascale computing is a vastly increased level of parallelism and much greater attention to data movement. They also discuss how many picojoules a floating-point operation or a dynamic RAM (DRAM) access costs now, and how those costs are expected to change in the future.
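To make the energy question concrete, here's a rough sketch; the 20-pJ-per-flop figure is an illustrative assumption on our part, not a number taken from Kogge and Shalf's article. At that cost, sustaining an exaflop/s would draw about 20 MW for the arithmetic alone, before counting DRAM accesses, which can cost an order of magnitude more energy per word moved:

```python
# Hypothetical energy budget for an exascale machine.
# The per-operation energy below is an illustrative assumption,
# not a figure from the article.
EXAFLOPS = 1e18            # floating-point operations per second
PJ = 1e-12                 # one picojoule, in joules

energy_per_flop_pj = 20    # assumed cost of one floating-point op
power_watts = EXAFLOPS * energy_per_flop_pj * PJ
print(f"Compute power alone: {power_watts / 1e6:.0f} MW")    # 20 MW
```

This is why data movement dominates the exascale power discussion: if each operand fetched from DRAM costs several times what the arithmetic does, the memory system, not the floating-point units, sets the power bill.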
The second article—“Programming for Exascale Computers,” by Bill Gropp and Marc Snir—deals with the quite significant challenges ahead for application developers who want to know whether their codes will need to be completely rewritten. At this point, the answer isn't completely clear, but Gropp and Snir summarize what programmers will be dealing with, what approaches will be feasible, and the pros and cons of trying to evolve current code to exascale hardware.
Finally, in “PaRSEC: Exploiting Heterogeneity to Enhance Scalability,” George Bosilca and his colleagues describe a runtime system and programming technique that let application programmers spend less time concentrating on the details of the hardware and on how data must be distributed among the processors. This approach should be a real help when dealing with heterogeneous compute nodes, and it can be tested and used well before the dawn of exascale hardware.
We hope that you enjoy this issue, and we can assure you that you'll be hearing about exascale computing for quite some time. Although the authors for this special issue topic are currently working in the US, the effort to produce hardware and software for computing at the exascale level is an international one. We wouldn't be surprised to see the first exascale computer produced outside the US.
Steven Gottlieb is a distinguished professor of physics at Indiana University, where he directs the PhD minor in scientific computing. He's also the Associate Editor in Chief of CiSE. His research is in lattice quantum chromodynamics (QCD). Gottlieb has a PhD in physics from Princeton University. Contact him at
Thomas Sterling is a professor of informatics and computing at Indiana University. He also serves as the executive associate director of the Center for Research in Extreme-Scale Technologies (CREST) and as its chief scientist. He has conducted research in parallel computing systems in industry, academia, and government centers, and currently leads a team of researchers at Indiana University to derive the advanced ParalleX execution model and develop a proof-of-concept reference implementation to enable a new generation of extreme-scale computing systems and applications. Sterling has a PhD in computer science from MIT. He's the inaugural winner of the Exascale Vanguard Award. Contact him at