
Guest Editors' Introduction: Cosmology and Computation

Joel E. Tohline, Louisiana State University
Greg L. , MIT

Pages: 17-20

We have constructed this theme issue on the role of computation in modern cosmological investigations principally because professionals across all disciplines of science and engineering are fascinated by questions related to the universe's origin. However, an issue on this topic is also warranted because over the past decade, rapid technological advancements have triggered revolutionary developments in this field.

Two of our theme articles discuss the computational challenges associated with major observational projects and two detail challenges facing modern simulation efforts. The authors hope that through CiSE's interdisciplinary audience, they will receive constructive feedback from researchers in other fields who are facing similar computational challenges.

With apologies to readers who subscribed to Computers in Physics last year, and gratitude to David Lewin, we are also reprinting the interview with Jeremiah Ostriker (see below) that appeared in CIP's May/June 1998 issue because it provides an excellent introduction to the modern field of computational cosmology.

As you read through the four theme articles, remember that astronomers can directly measure spatial structure at extraordinarily high resolution in only two dimensions—the two angular coordinates that are equivalent to projecting the earth's (or, in the field of cosmology, our Milky Way galaxy's) longitude and latitude system onto the sky. You will encounter, for example, references to square-arcsecond resolution—recall that there are approximately 41,000 square degrees on the sky and each square degree contains 1.3 × 10^7 square arcseconds. However, direct measurements of the distance to various structures and, hence, knowledge about the third spatial dimension are much less straightforwardly obtained. Generally speaking, in discussions of cosmology, the distance to an object is directly related to the velocity at which that object moves away from us as it rides along with the overall Hubble expansion of the universe. Hence, spectroscopic measurements that can provide accurate Doppler (recessional) velocities give positional information in the third dimension, where the recessional velocity relative to the speed of light, v/c, is related to the Doppler redshift z = Δλ/λ via the general expression 1 + z = [(1 + v/c)/(1 − v/c)]^(1/2), and an object's distance d from our own galaxy can be obtained via the linear relationship d = v/H_o, where the Hubble constant H_o ≈ 1.6 × 10^−18 s^−1 is a measure of the universe's present expansion rate.
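As a quick numerical illustration of these two relations, the sketch below inverts the redshift expression to recover v/c and then applies the linear Hubble relation. The constants and function names are ours, with the Hubble constant set to the value quoted above:

```python
# Invert 1 + z = sqrt((1 + v/c) / (1 - v/c)) to get the recessional
# velocity from a measured redshift, then apply Hubble's law d = v/H_o.
# Constants and names are illustrative choices, not from the article.

H0 = 1.6e-18          # Hubble constant, s^-1 (value quoted in the text)
C_CM_S = 3.0e10       # speed of light, cm/s
CM_PER_PC = 3.086e18  # centimeters per parsec

def beta_from_redshift(z):
    """Return v/c: solving the expression above gives
    beta = ((1 + z)^2 - 1) / ((1 + z)^2 + 1)."""
    s = (1.0 + z) ** 2
    return (s - 1.0) / (s + 1.0)

def distance_pc(z):
    """Distance in parsecs via the linear Hubble relation d = v/H_o."""
    v = beta_from_redshift(z) * C_CM_S   # recessional velocity, cm/s
    return v / H0 / CM_PER_PC

# A galaxy at z = 0.01 recedes at roughly 1% of the speed of light:
print(beta_from_redshift(0.01))   # ~0.00995
print(distance_pc(0.01))          # ~6.0e7 parsecs
```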

Remember also that because the speed of light is finite, as we examine structures at greater and greater distances from our galaxy, we view them not as they are at the present time but as they were at a well-defined earlier time t = d/c = v/(cH_o). So, for example, the cosmic microwave background (CMB) radiation that exhibits a redshift z ≈ 1,000 and therefore is composed of photons that have traveled unimpeded over a distance of 6 × 10^9 parsecs (2 × 10^10 light-years) gives us direct information about the structure of the universe as it existed approximately 2 × 10^10 years ago!
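A short sketch (reusing the same inverted-redshift expression; the constants are our choices) confirms the 2 × 10^10-year figure for the CMB:

```python
# Lookback time t = d/c = v/(c*H_o) = (v/c)/H_o for a given redshift.
# H0 is the value quoted in the text; other constants are ours.

H0 = 1.6e-18                # Hubble constant, s^-1
SECONDS_PER_YEAR = 3.156e7  # ~365.25 days

def beta_from_redshift(z):
    """v/c from redshift: beta = ((1+z)^2 - 1) / ((1+z)^2 + 1)."""
    s = (1.0 + z) ** 2
    return (s - 1.0) / (s + 1.0)

def lookback_years(z):
    """Lookback time in years: t = beta / H0, converted from seconds."""
    return beta_from_redshift(z) / H0 / SECONDS_PER_YEAR

# At z ~ 1,000 the photons are essentially at v/c ~ 1, so t ~ 1/H0:
print(lookback_years(1000))   # ~2e10 years
```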

We hope that these introductory remarks, along with David Lewin's interview with Jeremiah Ostriker, will provide you with an appropriate perspective on this issue's theme and that you will enjoy reading them as much as we have enjoyed bringing them together.

Computing the Universe

Copyright 1998, David I. Lewin. Reprinted from Computers in Physics, May/June 1998.

There is an apocryphal story that a Princeton University philosopher once asked his students during a final exam to "define the universe and give two examples." In a sense, Jeremiah P. Ostriker has spent much of the last decade working on problems similar to the one supposedly set by that philosopher. A computational cosmologist, Ostriker is interested in modeling the processes that governed the evolution of the universe from the big bang to the present day, causing the formation of galaxies and clusters of galaxies. He is the Charles A. Young Professor of Astronomy at Princeton and has been the university's provost since 1995.

Cosmologists "use computers in two quite different ways," says Ostriker. "One is for analyzing data. The data flow from our instrumentation is far, far greater than it was before." He cites the new Sloan Digital Sky Survey, which will survey the sky with telescopes equipped with charge-coupled-device sensors, producing terabytes of data. "You can't analyze that [much] data the way people used to," by hand or using traditional computer methods, he says.

In his own work as a theoretical astrophysicist, however, the role of computers is to simulate the evolution of the universe using "relatively well-defined models for the growth and structure of the universe," which come from fundamental physics, along with the suspected initial conditions. "Ultimately we take the results and compare them against nature," Ostriker says. "There exist a variety of different theories—cold dark matter, hot dark matter, etc.—but what has not been clear are the specific consequences of each model. Computational methods allow the consequences to be developed in a fair amount of detail."

On the basis of observation, astrophysicists conclude that the development of large-scale structure in the universe involved the formation of deep gravitational potential wells. In these wells, matter collapses to 1,000 times the mean density of the universe, in regions several million light-years in size. "These are potential wells in which gas collects and reaches temperatures, after shocks, on the order of 10^8 degrees," he says. "All of the x-ray satellites see such x-ray-emitting clusters of galaxies that emit on the order of 10^44 ergs per second." The question that needs to be answered is whether researchers would have predicted the existence of such objects from each of the competing cosmological models, Ostriker says.

"You can use the computer to put in the initial conditions, and follow the equations of hydrodynamics, which is essentially all from standard physics, to see whether you get the same number of clusters, the same temperatures, the same separations, the same correlations, as you see in the real sky," he says. On the basis of this comparison to observations, you can eliminate some of the physical models. "If you compute what you expect for the standard cold-dark-matter model with Ω[mass density of the universe] = 1.0, it would predict a very rapid evolution [of structure] such that the number of bright x-ray clusters, with redshift of 0.5, would be very much smaller than it is at redshift zero," Ostriker notes. That is, there would have been far fewer x-ray clusters in the early universe than are seen today. "Well, our telescopes can look that far back, and the number is comparable to the present number." From this, cosmologists deduce that the simple cold-dark-matter model of the universe is wrong. Such testing of models against modern observation "is one of the main reasons we now think that the universe is open, that it doesn't have enough matter to recollapse," he says.

Advances in computational hardware and techniques have allowed cosmologists to tackle harder and harder problems. Twenty years ago, researchers could only solve linear problems. Later they were able to construct two-dimensional models and, most recently, three-dimensional models with time. "Now," Ostriker says, "we can do [three-dimensional models] well." During the 1990s, cosmological simulations increased in resolution from modeling a cube with 32 elements on a side to one with 512 elements on a side.

"Is it always the situation that we don't have enough resolution?" he asks. "Not really. The size of these clusters is about a million parsecs, and the separation between them is about 50 million parsecs, so that you need a box of 100 million parsecs to obtain a representative sample." To resolve individual objects that are on the order of 1 million pc in diameter, Ostriker says that the 100-million-pc box must be divided into at least 10^6 volume elements, which can be done fairly well with a linear resolution of 512 elements. "At 1,024^3 or 10^9 elements you probably can [model] it as well as you need to," Ostriker says.
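The grid arithmetic here is easy to verify; the sketch below checks the minimum element count and the grid sizes quoted (variable names are ours):

```python
# Back-of-the-envelope check of the resolution figures in the interview.
box_pc = 100e6     # simulation box side: 100 million parsecs
object_pc = 1e6    # cluster diameter to resolve: ~1 million parsecs

min_linear = box_pc / object_pc       # cells per side needed: 100
min_elements = int(min_linear ** 3)   # minimum number of volume elements
print(min_elements)                   # 1000000 (the 10^6 quoted above)

# The actual grids discussed: 512 and 1,024 cells on a side.
print(512 ** 3)    # 134217728  (~1.3e8 elements)
print(1024 ** 3)   # 1073741824 (~1.07e9 elements, the "10^9" quoted)
```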

Ostriker recently spent five years leading the National Science Foundation-funded Grand Challenge Cosmology Consortium (GC3). The consortium was an effort to apply physics, astrophysics, and computer science to the problem of the evolution of the large-scale structure of the universe. "I think that attempt was pretty successful," says Ostriker, who is currently principal investigator on the computational-cosmology collaboration still under way with the National Center for Supercomputer Applications (NCSA) at the University of Illinois at Urbana-Champaign (UIUC).

The GC3 effort involved researchers from the Massachusetts Institute of Technology, the University of California at Santa Cruz, UIUC, and Indiana University. "Each of us had different skills, familiarity with different computer resources, and familiarity with different observations," he says. The collaboration is investigating distributed shared-memory architectures and advanced parallel-programming languages for cosmological modeling.

Besides the x-ray clusters, perhaps the greatest success of the Grand Challenge effort was in understanding the origin of the Lyman-alpha (Lyα) "forest," optical-depth fluctuations in neutral-hydrogen absorption lines that astronomers see on the path to distant quasars. "We knew they came from intervening gas, but no one had understood them or modeled them very well," Ostriker says. On the basis of their models, and those of other researchers, astronomers now believe that the Lyα-forest lines are observed when the line of sight to a quasar passes through a region of density waves in the intergalactic medium.

"The fact that our computer simulations of some models fit this enormous wealth of observations (from the Keck telescope and others) so well proves that at least some fraction of the models really have the physics right," the astrophysicist says.

Ostriker, a New York City native, started out as a theoretical astrophysicist. He received his undergraduate degree from Harvard University in 1959, then trained in the hydrodynamic aspects of astrophysics with Subrahmanyan Chandrasekhar at the University of Chicago, from which he received his PhD in 1964. Following a postdoctoral fellowship at Cambridge University in England, he joined the Princeton faculty as a research associate and lecturer in 1965 and became a full professor in 1971.

During his career, Ostriker has worked on a variety of problems—on pulsars with James Gunn, on x-ray sources, and more recently on the physical dynamics of shock-heated gases in the interstellar medium. All these problems involved the application of hydrodynamics to astrophysics. It was a small step for him to go from these problems to applying the same techniques on a larger scale to cosmological questions. "After all, the physical processes were similar," he says.

"I realized that with the rapid development of computers, those problems that had been treated with handwaving or in an ideological or philosophical way were now susceptible to detailed numerical treatment," Ostriker says. "That was very exciting for me." He began his computational-cosmology work around 1990 and published his first paper in the field in 1992. "The whole field is only a little older than that," he notes, with dark-matter studies without the use of hydrodynamics beginning in the 1980s. Ostriker attributes the increasing success of computational cosmology not to the brilliance of the GC3 consortium, but to the rapid improvement in computers and software. "It's the same as in many other areas of computational physics where we've known the equations, we just haven't known how to get solutions."

Because these problems require large datasets, memory and input/output requirements are the most critical issues. "The technical challenge, so to speak, is exactly the opposite of finding large primes," Ostriker says. "There, you have a very large number of operations on a very small number of numbers; but here, you are required to consider the universe at every time step at quite a bit of detail." This means that the model might have 10^9 fluid elements and produce 10^10 items to store at each of 10^3–10^4 time steps.
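To get a feel for the output volume this implies, the sketch below totals the figures Ostriker quotes, assuming (our assumption, not his) eight bytes per stored item:

```python
# Rough storage estimate for the simulation output sizes quoted above.
# The 8-bytes-per-item figure is an illustrative assumption.
items_per_step = 1e10         # items stored at each time step
bytes_per_item = 8            # e.g., one double-precision value
steps_low, steps_high = 1e3, 1e4

low = items_per_step * bytes_per_item * steps_low     # 8e13 bytes
high = items_per_step * bytes_per_item * steps_high   # 8e14 bytes
print(low / 1e12, "to", high / 1e12, "terabytes")     # 80.0 to 800.0
```

Even the low end dwarfs the terabyte-scale observational datasets mentioned earlier, which is why Ostriker calls the I/O and memory issues "paramount."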

"The I/O issues are paramount, and memory issues are paramount," Ostriker says. "A very, very fast, small-memory machine would not be useful for doing these models." Visualization of the data helps in the critical issue of understanding the model, he says.

"Suppose that you had the perfect simulation and it exactly reproduced the real universe, and when you visualized it, it looked just like the real universe," he says. "You wouldn't have learned very much, because you cannot understand the real universe very well just by looking at it. You have to be able to take it apart and look at all the moving elements." For example, would the universe stay the same in the absence of magnetic fields? "In the end, quantitative questions require quantitative analysis. Visualization is useful in pointing out errors, in gaining insight, and in explaining to the public."

Some of the GC3 simulations have been made available to the public. The IMAX film Cosmic Voyage, released last year, included several segments developed as part of the GC3 effort. Ostriker and his associates are developing additional visualizations of cosmic phenomena for a film currently in development, The End of the Dark Ages, which portrays the epoch when the first stars were formed.

About the Authors

Joel E. Tohline is a professor in the Department of Physics and Astronomy at Louisiana State University. He received his BS in physics from the Centenary College of Louisiana, and his PhD in astronomy from the University of California, Santa Cruz.