New Frontiers in Leadership Computing

James J. Hack, Oak Ridge National Laboratory
Michael E. Papka, Argonne National Laboratory and Northern Illinois University


Abstract—The guest editors present the second part of a two-part special issue on leadership computing.

Keywords—leadership computing; leadership systems; Leadership Computing Facility; LCF; scientific computing


Scientist and inventor Louis Pasteur famously spoke of the way discoveries and insights come about. “Chance,” he said, “favors only the prepared mind.”

The diverse methods scientists use to study the natural world have expanded in at least one significant way since Pasteur made his breakthroughs in chemistry and microbiology a century and a half ago: the introduction of high-performance computing, or HPC, into the process of scientific inquiry. Insights may still favor the prepared, but that preparation will increasingly include computational training or the assistance of a computational scientist.

Leadership in HPC

Leadership in HPC gives any nation an enormous competitive advantage in nearly every sector of the global economy, which is why many nations are investing heavily in their domestic and collective supercomputing capabilities. For the past decade this advantage has been firmly held by the United States, ever since Congress acted in 2004 to establish the Leadership Computing Facility, or LCF. This US Department of Energy (DOE) user facility, operated as two centers in Illinois and Tennessee, represents the world's largest computational resources dedicated to open science.

Historically, DOE has been the designer, builder, and operator of the nation's most advanced large-scale R&D user facilities. And while several advanced computing architectures have factored prominently in research conducted at national laboratories since the early 1980s, the creation of the LCF formalized HPC as a nationally competitive mode of making discoveries and technological breakthroughs, joining the ranks of light sources, accelerators, colliders, experimental fusion reactors, and nanoscale research centers. The advances and innovations that come out of DOE user facilities each year are the human achievements that are helping find practical solutions to society's biggest problems.

The most important function of the Leadership Computing Facility is to align leadership systems with the needs and goals of breakthrough science projects. This means priority is given to jobs that require a large fraction of the entire machine, need to run for long periods of time, or can't be accomplished without the resources available at the facility.

The current generation of leadership systems is diverse by design: Argonne National Laboratory operates a massively parallel IBM Blue Gene/Q machine, and Oak Ridge National Laboratory operates a Cray XK7, the first major supercomputing system to employ a hybrid architecture of conventional CPUs and GPU accelerators. Both petascale machines enable precision calculations and long-timescale simulations, and both are many times more powerful than their predecessors of just a few years ago, while occupying roughly the same space and drawing roughly the same power.
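
To give a concrete flavor of how applications target such hybrid nodes, here is a minimal sketch of directive-based GPU offload in C. The OpenACC directive, array size, and compiler behavior described in the comments are illustrative assumptions, not code taken from any LCF application.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const int n = 1 << 20;                 /* illustrative array size */
        double *a = malloc(n * sizeof(double));
        double *b = malloc(n * sizeof(double));
        double *c = malloc(n * sizeof(double));
        for (int i = 0; i < n; ++i) { a[i] = i; b[i] = 2.0 * i; }

        /* With an OpenACC-capable compiler, this loop is offloaded to the
           GPU while the host CPU orchestrates data movement; without one,
           the pragma is ignored and the loop runs on the CPU. */
        #pragma acc parallel loop copyin(a[0:n], b[0:n]) copyout(c[0:n])
        for (int i = 0; i < n; ++i)
            c[i] = a[i] + b[i];

        printf("c[42] = %f\n", c[42]);
        free(a); free(b); free(c);
        return 0;
    }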

For the computational science and engineering community, these centers host higher-fidelity physical models and numerical algorithms, more efficient and higher-quality software, and more in-depth data analytics. Scientific breakthroughs often require the ability both to expand an investigation in scale and to refine it using more predictive models and techniques to home in on what is truly interesting: to dig into the outliers and look deeper into the anomalies.

Doing so requires HPC capabilities. A physical process that reveals itself only at nanosecond rather than microsecond resolution, for example, may demand petaflops rather than teraflops of computing capability. Access to such capabilities is what makes each new generation of leadership system so exciting.
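
A rough back-of-the-envelope calculation, using assumed round numbers rather than figures from any particular application, shows why the two factors of a thousand line up:

    % Refining the resolved timescale from microseconds to nanoseconds
    % multiplies the number of time steps, and hence the floating-point work
    % per simulated interval, by roughly a factor of 10^3.
    \[
      \frac{1\,\mu\mathrm{s}}{1\,\mathrm{ns}} = 10^{3},
      \qquad
      10^{3} \times 1\ \mathrm{Tflop/s} = 1\ \mathrm{Pflop/s}.
    \]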

Recent Breakthroughs

To give a better sense of this technology's possibilities, we chose four articles for this issue in which leadership computing yielded significant outcomes. In "Scalable Implicit Flow Solver for Realistic Wing Simulations with Flow Control," Michel Rasquin and his colleagues describe the strong-scaling performance of a massively parallel turbulent flow solver on the entire Blue Gene/Q system. The work demonstrates a new capability, enabled by a leadership system, to significantly improve the aerodynamic performance of airplane wing designs with active flow control.
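
For readers less familiar with the terminology, strong scaling means the total problem size is held fixed while the core count grows; the conventional speedup and efficiency metrics are sketched below (standard definitions, not numbers from the article):

    % T(p) is the wall-clock time on p cores for a fixed problem size,
    % and p_0 is the smallest core count on which the problem was run.
    \[
      S(p) = \frac{T(p_{0})}{T(p)}, \qquad
      E(p) = \frac{p_{0}\, T(p_{0})}{p\, T(p)},
    \]
    % with E(p) = 1 corresponding to ideal strong scaling.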

In “High-Resolution Simulation of Pore Scale Reactive Transport Processes Associated with Carbon Sequestration,” David Trebotich and his colleagues present a new approach for modeling subsurface flow and transport. The authors wanted to study fluid-mineral interactions at the microscopic pore scale to understand how they control critical aspects of flow and transport in porous rock media, in particular as applied to the geologic sequestration of CO₂ injected into the earth. They demonstrate the scalability of their state-of-the-art algorithm on a massively parallel system and show results for pore-scale flow and transport in a realistic pore space obtained from image data.

In “High-Performance Computing Modeling Advances Accelerator Science for High-Energy Physics,” Panagiotis Spentzouris and his colleagues describe ways the accelerator community is using computational tools and HPC to advance design. The authors not only provide an example of a technical contribution made by large-scale computing to high-energy physics—specifically accelerator design—but also present an overview of how the code was made to perform well on modern architectures.

In “pF3D Simulations of Laser-Plasma Interactions in National Ignition Facility Experiments,” Steven Langer and his colleagues highlight two computational aspects of scientific simulations—mapping processes to nodes in the interconnect and optimizing parallel I/O—that become important at very large levels of parallelism. The article also makes a clear case for computing at this scale to address the basic laser-plasma physics relevant to the National Ignition Facility at Lawrence Livermore National Laboratory.
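
As a flavor of the second of those aspects, here is a minimal sketch of collective parallel output with MPI-IO, one common way to keep output scalable at high rank counts. The file name, per-rank block size, and data layout are illustrative assumptions and are not taken from the pF3D code.

    /* Each MPI rank writes one contiguous block of doubles at a
       rank-dependent offset; the collective call lets the MPI library
       aggregate requests across ranks. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        const int n = 1 << 16;                  /* doubles per rank (assumed) */
        double *buf = malloc(n * sizeof(double));
        for (int i = 0; i < n; ++i)
            buf[i] = rank + i * 1e-6;           /* placeholder field data */

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "snapshot.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        MPI_Offset offset = (MPI_Offset)rank * n * sizeof(double);
        MPI_File_write_at_all(fh, offset, buf, n, MPI_DOUBLE,
                              MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        free(buf);
        MPI_Finalize();
        return 0;
    }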

The Road to Exascale

The LCF is a key component of the US roadmap to exascale computing for the open science community. A vision this ambitious will require close collaboration between scientists and industry. The next phase will bring pre-exascale systems with an aggregate capability of 200–400 petaflops; these resources will also expand to meet the needs of the growing class of large-scale data science problems whose data volumes require leadership-class resources. The LCF centers are also helping prepare future computational scientists to use current and next-generation leadership systems, through training courses and by recruiting domain scientists to work alongside the science teams using these resources.

From the same famous French scientist whose lifelong dedication to science led to several world-changing discoveries comes another timeless quote: “Science knows no country, because knowledge belongs to humanity, and is the torch which illuminates the world.”

The LCF is one of the nation's top investments in finding solutions to energy and environmental problems. The race to exascale may be a point of national pride, but more importantly, it represents a greater potential to improve human health, develop alternative energy sources, and help mitigate environmental disasters—achievements that will benefit every nation.

As guest editors of this issue, we thank the editorial team at CiSE, whose expertise and professionalism made this special issue possible. We're especially grateful to the CiSE Editor in Chief, George Thiruvathukal, for devoting a two-part special issue to the topic of leadership computing and for showcasing the exciting work going on at the LCF centers. We look forward to contributing a regular column on leadership computing starting next spring.

James J. Hack directs the National Center for Computational Sciences at Oak Ridge National Laboratory, which houses the Oak Ridge Leadership Computing Facility. He has served as an editor for the Journal of Climate, given testimony to Congress on the topic of climate change, and recently served as a member of the National Research Council study A National Strategy for Advancing Climate Modeling. He's also actively involved in a number of national and international advisory and steering committees, including appointments with the Department of Energy Office of Science and the National Science Foundation. His research interests include physical parameterization techniques, numerical methods, and diagnostic methods for evaluating simulation quality. Hack has a PhD in atmospheric dynamics from Colorado State University. Contact him at jhack@ornl.gov.
Michael E. Papka directs the Argonne Leadership Computing Facility at Argonne National Laboratory, where he also serves as Deputy Associate Laboratory Director for Computing, Environment, and Life Sciences. He is a Senior Fellow of the Computation Institute, a joint institute of the University of Chicago and Argonne National Laboratory, and an associate professor of computer science at Northern Illinois University. His research interests include the visualization and analysis of large data from simulation and experimental sources. Papka has a PhD in computer science from the University of Chicago. Contact him at papka@anl.gov.