Issue No. 5, September/October 2004 (vol. 6)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MCSE.2004.38
Timothy Trucano , Sandia National Laboratories
Douglass Post , Los Alamos National Laboratory
An encompassing goal of contemporary scientific computing is to provide quantitatively accurate predictions that can help society make important decisions. The span of this intended influence includes such widely different fields as astrophysics, weather and climate forecasting, quantitative economic policy, environmental regulation, and performance certification of complex engineered systems such as nuclear power plants. To the degree that we believe accurate computational science and engineering (CSE) will have a growing impact on problems of societal importance, we must also be concerned about the consequences of inaccurate or wrong CSE. Human life might not be at risk, but money, time, environmental quality, and other resources very likely will be.
Bold hopes for CSE's societal impact must also contend with two general questions about its ultimate ability to perform high-consequence computing. The first asks for confirmation of the mathematical accuracy of the calculated results and of the error-free operation of the underlying software. This is the problem of verification in CSE; it is a mathematical and computer-science challenge. The second asks for confirmation that the physical models implemented and solved in CSE correctly represent the physical phenomena of interest. This is the problem of validation in CSE; it is an experimental challenge.
The degree to which CSE contributes to risk-based decision making is in direct proportion to the understood, as well as perceived, credibility of a chosen computational model. Credibility is, in turn, a complex function of the available knowledge about verification and validation (V&V) for that model. In this sense, V&V is the foundation of consequential scientific computing.
This issue of CiSE presents four articles that touch on some of the technical themes that enable and constrain V&V in computational science; they mainly emphasize validation. In the first, "Validating Astrophysical Simulation Codes," Alan Calder and colleagues discuss validation in the context of computational simulation of energetic astrophysical phenomena, including supernova explosions. Computational models in this field require strongly coupled multiphysics, including radiation transport, compressible mixing hydrodynamics, and exotic relativistic equations of state for matter. The authors discuss and illustrate common difficulties in validating complex scientific models, including the fact that the governing equations of the phenomena of interest are themselves evolving as a result of computational modeling; the validation task is thus strongly coupled to an ongoing scientific research effort. Moreover, the experimental data required for validation necessarily span a complex hierarchy, from well-controlled laboratory-scale experiments to purely observational, and relatively sparse, astrophysical data.
The second article, "Beyond the Means: Validating Climate Models with Higher-Order Statistics," by David W. Pierce, discusses the problem of validating climate models. For large-scale geophysical flows, the statistical variability in observational data makes it particularly hard to draw conclusions about a model's predictive credibility from computational-observational comparisons. Pierce summarizes some of the validation issues and research directions for climate modeling, providing an interesting view of the broad problem of dealing with experimental uncertainty in validation. Validation for astrophysical and climate simulations is especially challenging because controlled laboratory experiments are impossible for many problems of interest: we can't conduct controlled supernova explosions or climate experiments.
In the third article, "Building PDE Codes to be Verifiable and Validatable," Patrick Roache provides insight into the coupling of V&V with the construction of computational models. In particular, Roache emphasizes that existing methods and technologies for performing V&V could be applied effectively, but that CSE software must be constructed with care to take full advantage of them. Roache also discusses one of these methods in detail, the Method of Manufactured Solutions, which should be widely applicable in CSE.
In the final article, "An Experimenter's Perspective on Validating Codes and Models with Experiments Having Shock-Accelerated Fluid Interfaces," Robert Benjamin discusses the nature of the coupling of validation with computational model development from an experimenter's perspective. Benjamin emphasizes the effort required and value delivered by tightly coupling a dedicated validation experiment program to an ongoing CSE project. His major conclusion is that nothing less will suffice for establishing predictive credibility for consequential computational science.
These articles discuss many approaches to several V&V-related problems in CSE. The references within each article offer interested readers further information on this critically important topic.
Timothy Trucano is a member of the technical staff at Sandia National Laboratories. His research interests include the development and application of quantitative verification and validation methodologies relevant to large-scale computational models of the US Department of Energy National Nuclear Security Administration's Advanced Simulation and Computing Program. Contact him at email@example.com.
Douglass Post is an associate editor in chief of CiSE magazine. His research interests include methodologies for the development of large-scale scientific simulations for the US Department of Defense and for the Controlled Fusion Program. Contact him at firstname.lastname@example.org.