News
Issue no. 1, January/February 2005 (vol. 7), pp. 5-7
Published by the IEEE Computer Society
ABSTRACT
Today, scientists can predict the Earth's climate months ahead of time. A new synergy between two competing analysis methods, statistical and dynamic, is helping push forecasts out even further. To make accurate forecasts years into the future, however, scientists must first learn to accurately forecast the weather phenomenon El Niño.




MODELING EL NIÑO: A FORCE BEHIND WORLD WEATHER
Pam Frost Gorder
Forget SimCity. If you really want to play God, try your hand at a global climate model (GCM). These simulations link long-term weather patterns on a planetary scale, showing how even small temperature or pressure changes in one location can lead to a sunny day—or a natural disaster—on the other side of the world.
Today, scientists can predict the Earth's climate months ahead of time. A new synergy between two competing analysis methods is helping push forecasts out even further.
Imagine, for instance, a single computer model that could suggest years in advance how much snow will fall during ski season in Aspen, whether droughts will choke off rice production in Southeast Asia, and how fast the ice cap will melt on Kilimanjaro. All three predictions require an accurate forecast of one very complex climatic phenomenon—El Niño.
Air Conditioning
When it comes to forecasting an El Niño, the buck stops at the National Oceanic and Atmospheric Administration (see the sidebar for an explanation of El Niño). Hua-Lu Pan leads the climate modeling team at the Environmental Modeling Center, where all official NOAA forecasts begin. A new climate forecast system went online at the center in August.
The new system grew out of NOAA's weather forecasting system, which gathers global atmospheric data four times daily to form the best possible initial conditions for its weather-prediction models. The modelers then give their predictions to the National Weather Service and the rest of the community.
For climate predictions, Pan says, they calibrate those forecasts with knowledge based on retrospective forecasts going back more than 20 years.
"We think weather and climate are intimately linked, and when we make the weather model better, we should also be making the climate model better," Pan says.
His team imported an ocean model from NOAA's Geophysical Fluid Dynamics Laboratory and merged it with the atmospheric model to make a coupled model.
"Essentially it's the same model, but we make a forecast out to 10 months instead of 10 days. Now, of course, we can't even do a 10-day forecast accurately," he says. But because the ocean is slower to change than the atmosphere, he feels scientists can confidently look ahead month to month.
Once a month, NOAA's Climate Prediction Center (CPC) uses that forecast to predict the likelihood of large-scale events such as El Niño. In September, it concluded that sea surface temperatures in the equatorial Pacific had risen 0.5 degrees Celsius (about 1 degree Fahrenheit) above average—enough to suggest the beginnings of a weak El Niño for winter 2005.
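In code, that kind of check is a simple threshold test. The C sketch below flags possible El Niño or La Niña conditions from a series of monthly sea surface temperature anomalies; the three-month averaging window and the sample numbers are illustrative assumptions, not NOAA's exact operational procedure.

/* Minimal sketch: flag possible El Nino / La Nina conditions from a
 * series of monthly sea-surface-temperature anomalies (degrees C) in
 * the equatorial Pacific.  The +/-0.5 C threshold follows the figure
 * quoted in the article; the three-month averaging window and the
 * sample data are illustrative assumptions. */
#include <stdio.h>

static const char *classify(double anomaly)
{
    if (anomaly >= 0.5)  return "El Nino conditions";
    if (anomaly <= -0.5) return "La Nina conditions";
    return "neutral";
}

int main(void)
{
    /* Hypothetical monthly SST anomalies for one year. */
    double anomaly[12] = { 0.1, 0.2, 0.2, 0.3, 0.3, 0.4,
                           0.5, 0.6, 0.6, 0.7, 0.6, 0.5 };

    for (int m = 2; m < 12; m++) {
        /* A three-month running mean smooths month-to-month noise. */
        double mean3 = (anomaly[m - 2] + anomaly[m - 1] + anomaly[m]) / 3.0;
        printf("month %2d: 3-month mean %+.2f C -> %s\n",
               m + 1, mean3, classify(mean3));
    }
    return 0;
}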
Grappling with Chaos
Pan's model is based on physics, so it's dynamic. But the CPC also uses several other tools to make its forecast, some of which are statistical. Dynamic and statistical methods have been competing since computer-based weather forecasting began 50 years ago.
Statistical models enjoyed a clear advantage early on because they could predict seasonal changes on the basis of considerable historical data. Dynamic models, however, tried to derive weather and climate conditions essentially from scratch. They were limited by knowledge of oceanic and atmospheric physics, and dynamicists couldn't access much of the data from satellites and other instruments that could improve that knowledge. They also didn't have the computing power to run their models at high resolution.
Today, those obstacles are vanishing and dynamic models are catching up.
"The first dynamic models were very bad, and it took a long time for people to have faith in them," Pan says. "We've progressed to the point where people can rely quite a bit on the dynamical forecasts." Dynamic climate forecasts, though, have lagged behind through the years.
In the mid-1980s, Mark Cane assembled the first-ever computer model of El Niño. Cane, now a professor of Earth climate sciences at the Lamont-Doherty Earth Observatory, and then-student Steve Zebiak got around limits in computing power by exploiting El Niño's slow three- to seven-year cycle and running the simulation at a large time step: 10 days, as opposed to the typical few hours.
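The payoff of that longer time step is easy to see with back-of-the-envelope arithmetic. The short C sketch below compares the number of steps a two-year forecast needs at a 10-day step versus a typical weather-model step; the three-hour figure used for the latter is an assumption for illustration.

/* Why a long time step mattered: count the model steps needed for a
 * two-year forecast at a 10-day step versus a few-hour step typical
 * of weather models (the 3-hour figure is an illustrative assumption). */
#include <stdio.h>

int main(void)
{
    const double forecast_days    = 2.0 * 365.0;  /* two-year forecast   */
    const double coarse_step_days = 10.0;         /* El Nino model step  */
    const double fine_step_days   = 3.0 / 24.0;   /* assumed 3-hour step */

    double coarse_steps = forecast_days / coarse_step_days;
    double fine_steps   = forecast_days / fine_step_days;

    printf("10-day steps needed : %.0f\n", coarse_steps);
    printf("3-hour steps needed : %.0f\n", fine_steps);
    printf("reduction factor    : %.0fx fewer time steps\n",
           fine_steps / coarse_steps);
    return 0;
}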
Although those early two-year forecasts consumed many hours on an early Masscomp workstation, Cane now routinely runs 100,000-year forecasts on his desktop PC. Those simulations take close to a week and help him and his colleagues incorporate ultra-long-term climate variables such as small changes in the Earth's orbit. He's also trying to learn more about why some decades have several El Niños while others don't.
Cane agrees that physical or dynamic models such as his are gaining on the statistical models.
"I believe that ultimately the physical models will win because they incorporate a tremendous amount of knowledge that's difficult if not impossible to capture in a statistical scheme," he says.
So what role will statistical models have in the future?
"Let the physical models get that much better first, and then we can worry about it," he says with a laugh. "But I think we will end up with a statistical-dynamical combination for a number of reasons."
The primary reason is chaos.
The Earth's climate naturally exhibits a lot of variability, and at some point, even good dynamic models veer from reality. Statistics helps scientists get a handle on that variability. New statistical methods are emerging to quantify forecast uncertainty so that people can better decide how much to rely on forecasts.
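One common way to put a number on that uncertainty is to run an ensemble: many forecasts started from slightly different initial conditions, with the spread among them serving as the error bar. The C sketch below illustrates the idea with a deliberately chaotic toy model (a logistic map); the model and every number in it are assumptions for illustration, not any operational scheme.

/* Quantifying forecast uncertainty with an ensemble: advance several
 * nearly identical initial states with the same chaotic update rule
 * and report the spread among members at each lead time.  The toy
 * "model" and all numbers are illustrative assumptions. */
#include <math.h>
#include <stdio.h>

#define MEMBERS 5
#define STEPS   20

int main(void)
{
    double x[MEMBERS];

    /* Nearly identical initial conditions for each ensemble member. */
    for (int m = 0; m < MEMBERS; m++)
        x[m] = 0.500 + 0.001 * m;

    for (int step = 1; step <= STEPS; step++) {
        double mean = 0.0, var = 0.0;

        /* Advance every member with the same chaotic update rule. */
        for (int m = 0; m < MEMBERS; m++) {
            x[m] = 3.9 * x[m] * (1.0 - x[m]);
            mean += x[m];
        }
        mean /= MEMBERS;

        for (int m = 0; m < MEMBERS; m++)
            var += (x[m] - mean) * (x[m] - mean);

        /* The spread (standard deviation) is a simple uncertainty measure:
         * small spread -> trust the forecast more; large spread -> less. */
        if (step % 5 == 0)
            printf("step %2d: ensemble spread %.3f\n",
                   step, sqrt(var / MEMBERS));
    }
    return 0;
}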
At NOAA, Pan points to the success of some recent statistical models based on dynamic models for weather forecasts. "The statistical and dynamical parts do more than complement each other," he says. "They're both going to have a place in the forecast business."
At the Scripps Institution of Oceanography, a hybrid El Niño model is working toward two-year forecasts. Programmer David Pierce says the model is unusual because the ocean portion is dynamic and the atmosphere is statistical. From his perspective, the only difference is that the statistical portion had to be trained on observed data—in this case, 30 years of Scripps' forecasts—for 500 years of model running time. "You just have to plug through it, and that takes a while," he says.
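A toy version of that hybrid idea looks something like the C sketch below: a statistical "atmosphere" that is just a linear regression fitted to past data, coupled to a crude dynamical "ocean" that is stepped forward in time. The equations, coefficients, and data here are invented for illustration and bear no relation to the actual Scripps model.

/* Toy hybrid model: a dynamical ocean stepped forward in time, coupled
 * to a statistical atmosphere that is a linear regression trained on
 * past data.  Everything here -- the ocean equation, the regression
 * form, and the numbers -- is an illustrative assumption. */
#include <stdio.h>

#define NOBS 5

int main(void)
{
    /* "Training" pairs: observed SST anomaly (C) and the wind-stress
     * anomaly that accompanied it (arbitrary units).  Hypothetical data. */
    double sst_obs[NOBS]  = { -1.0, -0.5, 0.0, 0.5, 1.0 };
    double wind_obs[NOBS] = { -0.8, -0.4, 0.1, 0.5, 0.9 };

    /* Ordinary least squares for wind = a * sst + b. */
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < NOBS; i++) {
        sx  += sst_obs[i];
        sy  += wind_obs[i];
        sxx += sst_obs[i] * sst_obs[i];
        sxy += sst_obs[i] * wind_obs[i];
    }
    double a = (NOBS * sxy - sx * sy) / (NOBS * sxx - sx * sx);
    double b = (sy - a * sx) / NOBS;

    /* Toy "dynamical" ocean: the SST anomaly relaxes toward zero but is
     * pushed by the statistically predicted wind stress. */
    double sst = 0.6;                 /* initial anomaly, degrees C    */
    const double dt = 10.0;           /* 10-day time step, as in text  */
    const double damping = 0.005;     /* per day (assumed)             */
    const double coupling = 0.004;    /* C per day per wind unit       */

    for (int step = 0; step <= 36; step++) {   /* ~1 year of 10-day steps */
        double wind = a * sst + b;             /* statistical atmosphere  */
        if (step % 9 == 0)
            printf("day %4.0f: SST anomaly %+.2f C, wind %+.2f\n",
                   step * dt, sst, wind);
        sst += dt * (-damping * sst + coupling * wind);  /* dynamic ocean */
    }
    return 0;
}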
Spin Cycles
When Lamont-Doherty's Cane isn't running his model—an updated version of the one he developed in the 1980s—on his desktop machine, he uses a Beowulf cluster. This is necessary for a project that looks closely at droughts in the western US and requires a more detailed atmosphere.
NOAA runs its mammoth GCMs in short, 15-minute time steps, so nothing less than a supercomputer will do. Pan's team uses a partitioned IBM SP parallel system dubbed "Frost and Snow." With 640 processors running at 1.3 GHz, it's more than twice as fast as the system it replaced in 2002, but as of October 2004, it was about to be replaced by yet another system with almost twice as many processors running at 1.7 GHz. (Forecasting for the government has its advantages, Pan says: "We get frequent upgrades.")
Over the years, NOAA has worked with different communications schemes—vector systems, which process large arrays (vectors) of data, the classic message-passing interface (MPI), and the newer OpenMP, which allows for shared memory among processors. Pan suspects his team might mix these schemes in the future—for example, MPI could pass messages efficiently between processor nodes, but within the nodes, OpenMP might work better. The climate team will rewrite its model to fully exploit whatever the new architecture can provide.
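In practice, that mix often looks like the C sketch below: MPI ranks each own a piece of the grid and exchange boundary data by message passing, while an OpenMP directive shares each rank's local work among the processors of its node. The grid layout and the stand-in "physics" are assumptions for illustration, not NOAA's code.

/* Minimal sketch of the mixed scheme: MPI passes data between ranks
 * (nodes), OpenMP shares work among a node's processors.  Each rank
 * owns a strip of grid columns, threads update them in parallel, and
 * neighbouring ranks swap one boundary value.  The grid and the
 * "physics" are placeholders. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define NCOLS 1000   /* grid columns owned by each rank (assumed) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double field[NCOLS];

    /* OpenMP: shared-memory parallel loop over this rank's columns. */
    #pragma omp parallel for
    for (int i = 0; i < NCOLS; i++)
        field[i] = rank + 0.001 * i;   /* stand-in for column physics */

    /* MPI: send the rightmost column value to the next rank and receive
     * the previous rank's boundary value (a simple halo exchange). */
    double halo = 0.0;
    int next = (rank + 1) % size;
    int prev = (rank - 1 + size) % size;
    MPI_Sendrecv(&field[NCOLS - 1], 1, MPI_DOUBLE, next, 0,
                 &halo,              1, MPI_DOUBLE, prev, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d of %d: %d threads, received halo %.3f\n",
           rank, size, omp_get_max_threads(), halo);

    MPI_Finalize();
    return 0;
}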
Rewriting the model for each new machine is standard procedure, Pan says. Since the first digital computers went operational in 1945, scientists have been trying to squeeze weather simulations out of them. Now, weather and climate modeling are driving forces behind the development of faster, more powerful computers. The ultimate goal is a GCM that captures all the planet's physics—ocean, atmosphere, clouds, solar radiation, ice, vegetation, and so on. And that's a big job for any computer system.
Cane sums it up: "Snow fall, snow cover, snow melt—each one requires that much more code."
Building models that can forecast far into the future is another challenge altogether.
Cause and Effect
Scientists would like to make their El Niño forecasts as long-range as possible so communities can prepare for storms and floods—farmers could choose crops depending on expected rainfall; government agencies could in turn give better economic forecasts. Many different strategies for improving models are in the works.
NOAA is focusing on improving the accuracy of its six-month forecasts, which is tough during a year like 2004. "[With] a strong El Niño, prediction is easy," Pan says. "The signal is strong by April or May, and you can make a prediction way ahead of time." Diagnosing a weak El Niño is much more difficult.
For the past few years, his team has been working on improving regional responses in the coupled model because they're critical for gauging El Niño's effects. "We may have a pretty good idea of where El Niño is going to go, but to translate that to a forecast for whether it's going to be warmer or wetter than normal in California or Texas—that depends on how good the models are for those areas," he says.
At Scripps, Pierce is working on a way to make the models that play out El Niño's effects less computationally intensive. To save on space required for output data, modelers could look at climate variables as distributions instead of individual data points, he says. A distribution shaped like a bell curve, for example, can be described with only two numbers—the average value for the characteristic being measured and the curve's width. "If you're worried about flooding, say, then you would look at the distribution of the highest precipitation days for a region and see if it looks very different from the average high precipitation days in a year," he explains.
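Reducing a sample to those two numbers is straightforward, as the C sketch below shows for a handful of hypothetical wettest-day precipitation totals; the data and the ten-day sample size are invented for illustration.

/* Sketch of the space-saving idea: instead of storing every daily
 * value, summarize a variable as a distribution.  Here the wettest
 * days of a hypothetical model year are reduced to two numbers, a
 * mean and a width (standard deviation). */
#include <math.h>
#include <stdio.h>

#define NDAYS 10   /* ten wettest days of the model year (assumed) */

int main(void)
{
    /* Hypothetical daily precipitation totals, millimetres. */
    double wettest[NDAYS] = { 42.0, 38.5, 55.1, 47.3, 61.0,
                              39.9, 44.2, 50.6, 36.8, 48.4 };

    double mean = 0.0;
    for (int i = 0; i < NDAYS; i++)
        mean += wettest[i];
    mean /= NDAYS;

    double var = 0.0;
    for (int i = 0; i < NDAYS; i++)
        var += (wettest[i] - mean) * (wettest[i] - mean);
    double width = sqrt(var / NDAYS);

    /* Two numbers stand in for the whole sample; comparing them with a
     * long-term average distribution flags unusual flood risk. */
    printf("wettest-day distribution: mean %.1f mm, width %.1f mm\n",
           mean, width);
    return 0;
}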
Kevin Trenberth, head of the Climate Analysis Section at the National Center for Atmospheric Research, held a November workshop to help scientists better exploit the wealth of observational data available from satellites and ocean-going instruments. He'd like to see world organizations in various disciplines—from geology to oceanography to atmospheric sciences—pool their data to obtain a better picture of what's really happening in the world and use that knowledge to improve models.
At Lamont-Doherty, Cane hints that senior research scientist Dake Chen recently developed a way to make their El Niño model much more realistic. There will always be a natural limit to how well scientists can forecast the Earth's climate, however.
"At some point, you can't predict any better because the system won't allow it," he concludes. "But I don't think we're close to that limit yet."
Pam Frost Gorder is a freelance science writer based in Columbus, Ohio.