Issue No. 5 - September/October 2009 (vol. 11)
pp. 7-9
Published by the IEEE Computer Society
Thom H. Dunning, University of Illinois, Urbana-Champaign
ABSTRACT
Our hope is that this issue of Computing in Science & Engineering provides you with a sense of the current state of petascale computing, its opportunities and challenges, and its implications.
New instruments are the catalysts for scientific advances. A telescope reveals the universe's largest structures—stars and galaxies—and even provides evidence for black holes, while a microscope reveals some of its smallest, such as the filigree of a cell's working parts. A Doppler radar array sees into developing weather, while MRIs peer into the human body to help us diagnose illness. A particle accelerator illuminates subatomic particles that we otherwise could only imagine, while a wind tunnel allows us to design better airplane wings. But only one scientific tool can do all of these things—a supercomputer.
In Switzerland, researchers are using supercomputers to reverse-engineer the brain. In southern California, supercomputers are helping us understand the impact of earthquakes on the San Andreas fault. Supercomputers are also modeling the simplest of life forms—viruses—at the University of Illinois and the flow of blood through clogged arteries at Brown University. At companies large (Boeing, Motorola, Procter & Gamble) and small (Simulia, Research Triangle Institute, Digital Ribbon), these extraordinary machines are helping improve products and reduce design time. As science historian James Burke notes in his Connections television series, scientific breakthroughs are born in many places to many parents, but only supercomputers have such a large and varied brood.
We're now entering a new era in computing—the petascale computing era. This past year, petaflops computers were installed at Los Alamos National Laboratory (IBM's Roadrunner computer) and at Oak Ridge National Laboratory (Cray's Jaguar). And the US National Science Foundation has funded a petascale computing system (IBM's Blue Waters) at the University of Illinois that is expected to sustain a petaflops—that is, a quadrillion (10^15) arithmetic operations per second—on real-world science and engineering applications.
Although the petascale era has just begun, it's been abundantly clear for some time that simply building large computers is not enough. We can't just construct the fastest possible machines; we must build comprehensive programs around those machines to address all aspects of the supercomputing enterprise. These programs must examine how we can best

    • enhance the computing system software to make these complex systems more usable by scientists and engineers,
    • create science and engineering applications that can take full advantage of the systems' extraordinary capabilities, and
    • educate a new generation of scientists and engineers who can contribute to developing these capabilities and can use these systems to advance both scientific discovery and engineering practice.

At the University of Illinois' National Center for Supercomputing Applications, we're doing just that. In 2007, working with IBM and the Great Lakes Consortium for Petascale Computation, we embarked on the Blue Waters project. This issue of Computing in Science & Engineering uses the Blue Waters sustained-petascale system and all the work that surrounds it as a touchstone. The issue focuses not on the computer system itself, but on the opportunities that such systems provide, as well as some of the challenges they bring. Roadrunner, Jaguar, and Blue Waters are early entrants in the petascale era and will be followed by many more systems as the foundational technologies of petascale computing mature. Determining what we need to exploit these three systems for science and engineering will provide an important guide—not only for the petascale computing era, but also for the exascale era that will surely follow.
University of Notre Dame's Peter Kogge opens the issue with a historical review of the computing architectures that have brought us to the petascale era. He tracks how key metrics have evolved over the past 15 years and discusses their implications for how we'll develop scientific applications to run on petascale architectures.
University of Illinois' William Gropp notes that increased performance will come only through increased use of parallelism, and he discusses the challenges of programming petascale computers with hundreds of thousands of compute cores. Although the increasing availability of new software tools makes this job easier, application developers must still invest considerable effort if they're to take full advantage of petascale computers' extraordinary capabilities.
In their article, University of Michigan's Sharon Glotzer and Shodor's Bob Panoff and Scott Lathrop highlight the current dearth of both formal and informal educational curricula that today's students need to harness petascale computing systems and use them to create scientific and engineering breakthroughs. The authors discuss the Blue Waters project's long-term undergraduate and graduate efforts to help remedy this situation.
To close the issue, a host of authors from a variety of scientific and engineering disciplines offer a brief take on what they've learned so far as long-time high-performance computing users and potential users of petascale systems. They also describe what petascale science will mean for their disciplines, and how it will expand and enhance their research.
It's an exciting time in supercomputing. Government funding agencies, national laboratories, and universities have created an ecosystem of computers to advance scientific discovery and engineering practice. Laboratory-scale computers provide the resources needed for research groups to explore new ideas and develop new applications. Large-scale computing systems at national centers—such as the NSF-funded centers in Texas, Tennessee, and Pennsylvania, and those funded by the US Department of Energy (DOE) in California, New Mexico, and Tennessee—provide the computing resources we need to address a wealth of science and engineering problems. Finally, true leadership-class computing systems like Roadrunner, Jaguar, and Blue Waters let us address the most challenging problems in science and engineering. Our hope is that this issue of Computing in Science & Engineering provides you with a sense of the current state of petascale computing, its opportunities and challenges, and its implications. Enjoy!
Thom H. Dunning, Jr., is director of the National Center for Supercomputing Applications at the University of Illinois, Urbana-Champaign, where he also directs the Institute for Advanced Computing Applications and Technology and is the Distinguished Chair for Research Excellence in Chemistry. His research interests include computational studies of molecular structure, energetics, and dynamics, most recently of a class of molecules exhibiting extended valence (hypervalence). Dunning has a PhD in chemistry from the California Institute of Technology. Contact him at tdunning@ncsa.illinois.edu.