Computing in Science & Engineering, vol. 11, no. 2, March/April 2009
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MCSE.2009.22
Richard T. Kouzes , Pacific Northwest National Laboratory
Alan C. Calder , Stony Brook University
In recent years, a common concern among computational scientists has been the need to prepare for computing at the petaflop scale and beyond. Many question the capabilities and validity of extant numerical methods for multiscale and multiphysics (no need to even mention multidimensional) simulations taking full advantage of coming architectures. Computational scientists are also concerned that advanced applications will require component integration across disciplinary boundaries,1 that such applications present difficult challenges in verifying and validating components and integrated simulations,2 and that the entire field may suffer from the lack of a well-prepared high-performance computing workforce.3 With the articles presented in this special astrophysics-themed issue, we assert that the process of achieving multiscale, multiphysics, petaflop-scale simulation capability is well under way, and we hope that the articles serve to allay some of these concerns.
The first article describes modeling supernovae—spectacular stellar explosions that for a short time rival their host galaxy's luminosity. These events drive galactic isotopic evolution by producing and disseminating heavy elements, produce neutron stars or black holes (in some cases), and are associated with the enigmatic explosions known as gamma-ray bursts (in other cases). Motivated largely by the use of one class of these events (Type Ia) as distance indicators ("standard candles") for cosmological studies, observational surveys of these events are gathering data at an incredible rate. Similarly, theories about these events are progressing rapidly, enabled primarily by large-scale parallel computing. The last article addresses a critical need in this process—the need for high-performance parallel analytical tools for the large data sets that arise from both observation and simulation. Together, these articles present a sampling of contemporary methods in astrophysics with applications well beyond the field.
In "Treating Unresolvable Flame Physics in Simulations of Thermonuclear Supernovae," Dean M. Townsley describes one piece of the puzzle of thermonuclear (Type Ia) supernovae. These explosions occur when a compact star known as a white dwarf undergoes a runaway thermonuclear explosion. This paradigm is widely accepted as the explanation for most events, but robust models remain elusive. Townsley's article addresses the need to include subcentimeter-scale turbulent nuclear flame fronts in simulations of exploding, roughly Earth-sized white dwarf stars. The article describes subgrid-scale methods for modeling such reaction fronts, which can't be resolved in macroscopic simulations, a situation that arises in most terrestrial combustion applications as well as in astrophysics.
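One common way to carry an unresolvable front on a coarse grid, sketched here only to illustrate the idea and not as the article's exact scheme, is to evolve a reaction progress variable with a diffusion-reaction equation whose diffusivity and reaction rate are tuned so that the artificially broadened front spans a few zones and propagates at a prescribed speed. The parameters below are made up for illustration:

```python
import numpy as np

def model_flame(nx=400, dx=1.0, s_target=1.0, w_target=4.0, steps=2000):
    """Evolve a reaction progress variable phi (0 = fuel, 1 = ash) with a
    diffusion-reaction equation.  D and tau are chosen so the artificially
    broadened front is ~w_target cells wide and moves at ~s_target cells per
    unit time -- the key idea behind model (not physical) flames on coarse
    grids.  For this KPP-type reaction term the front speed is 2*sqrt(D/tau)."""
    D = s_target * w_target          # diffusivity sets width * speed
    tau = w_target / s_target        # reaction time sets width / speed
    dt = 0.2 * dx * dx / D           # explicit-diffusion stability limit
    phi = np.zeros(nx)
    phi[: nx // 10] = 1.0            # ignite the left edge of the domain
    for _ in range(steps):
        lap = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2
        lap[0] = lap[-1] = 0.0       # crude zero-flux boundaries
        phi = phi + dt * (D * lap + phi * (1.0 - phi) / tau)
    return phi

phi = model_flame()
front = int(np.argmin(np.abs(phi - 0.5)))   # cell where the front now sits
```

Tracking where phi crosses 0.5 shows the front sweeping steadily across the grid at the prescribed speed, even though its true physical width would be far below one zone.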
"A New Low Mach Number Approach in Astrophysics," by Ann S. Almgren, John B. Bell, Andy Nonaka, and Michael Zingale, addresses another piece of the thermonuclear supernova puzzle—convection in the thermonuclear runaway's early stages. The article describes the significant modeling challenge posed by disparate time scales: a sound wave crosses a computational zone far faster than the slow, subsonic convective flow in the precursor star evolves. Most available hydrodynamic methods either address compressible (high Mach number) flow with an explicit time-integration scheme or assume incompressibility, but both approaches are inappropriate for this application. The article presents a state-of-the-art method for filtering sound waves while retaining the ability to address compressibility effects—essential features of many astrophysical applications.
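The payoff of filtering sound waves is easy to quantify: an explicit compressible scheme must resolve the sound-crossing time of a zone, while a low Mach number scheme needs only to resolve the advective crossing time, so the allowed time step grows by roughly a factor of 1/Mach. The numbers below are illustrative choices, not figures from the article:

```python
# Time-step gain from filtering sound waves, for conditions loosely
# representative of slow convective flow in a stellar interior.
dx = 1.0e5        # zone size, cm
c_sound = 5.0e8   # sound speed, cm/s
u_conv = 5.0e6    # convective flow speed, cm/s

dt_compressible = dx / (abs(u_conv) + c_sound)  # explicit CFL limit
dt_low_mach = dx / abs(u_conv)                  # advective limit only

mach = u_conv / c_sound                 # 0.01 for these numbers
speedup = dt_low_mach / dt_compressible  # ~1/Mach, about 100x here
```

At Mach 0.01 the low Mach number formulation takes time steps roughly a hundred times larger, which is the difference between a feasible and an infeasible simulation of hours of convection.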
A point worth stressing here is that the articles on thermonuclear supernovae present different methods for addressing parts of the same problem. The complexity of thermonuclear supernovae and the range of relevant length and time scales necessitate the use of different algorithms, and the methods described in the articles address only two stages of the evolution—the convective early stage of the runaway and the explosion itself. Simulating a complete event would require additional methodology for evolving the progenitor system and generating the light curve—the actual outburst's observable. Similar issues occur with core-collapse supernovae. We note that generating consistent results among the different methods applied to a given problem is a long-standing challenge that must be met to use petaflop-scale systems effectively.
In "Stellar Core Collapse: A Case Study in the Design of Numerical Algorithms for Scalable Radiation Hydrodynamics," Eric S. Myra, F. Douglas Swesty, and Dennis C. Smolarski address another unsolved problem in astrophysics—core-collapse supernovae. These explosions occur when the core of an evolved massive star collapses under its own weight. It's generally accepted that gravitational binding energy released by the core's collapse powers a shock wave that eventually explodes the star, but the process by which this occurs is far from completely understood. There are many pieces to this puzzle, including relativistic effects, magnetic fields, standing accretion shock instabilities, acoustic modes, and the piece addressed by this article, radiation hydrodynamics. In this case, the radiation is in the form of weakly interacting particles known as neutrinos that emanate from the collapsed core, and the contribution of this radiation to the explosion dynamics is the subject of contemporary research. The problem is especially challenging because physics dictates that the neutrino radiation isn't in equilibrium with the dense stellar matter and is therefore described by the Boltzmann transport equation (BTE). Solving the full (seven-dimensional) BTE isn't possible on present architectures, so the article describes and justifies an approximate algorithm.
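A quick back-of-the-envelope count shows why the full BTE is out of reach: the neutrino distribution function depends on three spatial coordinates, the propagation direction, the particle energy, and time, and storing even one snapshot at modest resolution already runs to terabytes per variable. The resolutions below are illustrative choices, not figures from the article:

```python
# Storage for one discretized snapshot of a distribution function
# f(x, y, z, direction, energy) -- six of the BTE's seven dimensions,
# with time as the seventh.
nx = ny = nz = 512          # spatial zones per dimension
n_angles = 64               # discrete propagation directions
n_groups = 64               # neutrino energy groups
bytes_per_value = 8         # double precision

cells = nx * ny * nz * n_angles * n_groups
terabytes = cells * bytes_per_value / 1e12   # ~4.4 TB per snapshot
```

Multiply by several neutrino species and the many thousands of time steps a collapse simulation requires, and the motivation for approximate transport algorithms is clear.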
The final article in this special issue addresses an important piece of any computational science puzzle—a parallel infrastructure for analysis. In "The Beowulf Analysis Symbolic Interface: Interactive Parallel Data Analysis for Everyone," Enrico Vesperini, David M. Goldberg, Stephen McMillan, James Dura, and Douglas Jones take on the problem that parallel software development and dissemination lag behind the availability of affordable parallel machines. The article presents the Beowulf Analysis Symbolic INterface (BASIN), a suite of parallel computational tools for the management, analysis, and visualization of large data sets. This infrastructure takes advantage of contemporary clusters and multicore PCs and offers a powerful alternative to accepted serial tools and libraries.
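The core pattern behind such tools is data-parallel analysis: partition a large data set across workers, analyze each piece locally, and reduce the partial results. The sketch below illustrates the pattern with a parallel histogram built from Python's standard library and NumPy; it is a generic illustration of the idea, not BASIN's actual API:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def partial_hist(chunk, bins=16):
    """Histogram one worker's slice of the data."""
    counts, _ = np.histogram(chunk, bins=bins, range=(0.0, 1.0))
    return counts

def parallel_hist(data, nworkers=4):
    """Map: histogram each chunk in parallel.  Reduce: sum the counts.
    Threads suffice here because NumPy releases the GIL in histogram."""
    chunks = np.array_split(data, nworkers)
    with ThreadPoolExecutor(nworkers) as pool:
        partials = list(pool.map(partial_hist, chunks))
    return np.sum(partials, axis=0)

rng = np.random.default_rng(0)
data = rng.random(100_000)          # stand-in for a large observed/simulated catalog
counts = parallel_hist(data)
serial, _ = np.histogram(data, bins=16, range=(0.0, 1.0))  # check against serial
```

Because histogram counts combine by simple addition, the parallel result is bitwise identical to the serial one; the same map-reduce structure carries over to MPI on a cluster, where each rank holds only its own slice of a data set too large for one node.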
We believe these articles serve as a snapshot of computational science's capabilities at this point in time. The astrophysics problems described in the articles are but a few of computational astrophysics' and cosmology's interesting problems, which are themselves but a sampling of computational science. One example is worth mentioning—the hydrodynamics methods described in these articles are all grid based, and considerable progress is also being made with particle-based methods.
Alan C. Calder is an assistant professor at Stony Brook University and a member of the New York Center for Computational Sciences. His research interests include supernovae, high-energy density physics, and the verification and validation of multiphysics codes and simulations. Calder has a PhD in physics from Vanderbilt University. Contact him at email@example.com.
Richard T. Kouzes is a laboratory fellow at Pacific Northwest National Laboratory and an adjunct professor at Washington State University. His research interests include neutrino physics, radiation detection, and homeland security applications. Kouzes has a PhD in physics from Princeton University. Contact him at firstname.lastname@example.org.