US Department of Defense High Performance Computing Modernization Program
Pages: 10-11
The US Department of Defense is reducing its dependence on the traditional empirical "design, build, break, and fix" paradigm for the design and test of weapons systems by supplementing empirical testing with computational science and engineering. Because design engineers can explore a much wider range of design options much more quickly, this approach could lead to better-optimized designs with fewer flaws, reducing costly rework and schedule delays.
Researchers are also using computational applications to explore phenomena that can't be subjected to controlled tests, such as predicting operational weather conditions and climate change. They're also using them for both basic and applied research in other areas, such as materials science, biomedicine, networking and communications, aerodynamics, fluid mechanics, acoustics, and electromagnetics, to name a few. This issue of CiSE highlights some examples of such applications.
Even small improvements in gas-turbine efficiency can result in substantial savings and improved performance over the life of an aircraft. But performance optimization is difficult due to the challenges of measuring gas flow and compressor properties, not to mention the time and effort required to construct modified engine systems. Steven Gorrell and his colleagues examine the interactions between the rows of blades in gas-turbine compressors that lead to performance degradation due to turbulence. They're among the first to include time-dependent effects, such as the gas flow's "unsteady" motion. Such studies are proving useful for minimizing entropy production and energy dissipation, leading to reduced turbine heat and mechanical loads as well as better, more efficient jet-engine performance.
High-power microwave amplifiers are key elements in several weapons systems, including nonlethal directed-energy weapons and radar systems. Timothy Fleming and his colleagues describe the application of a relativistic particle-in-cell code to analyze the detailed behavior of candidate amplifier-tube designs. They used simulations to identify the key elements of good amplifier performance, study various design options, and produce better amplifiers than possible by following an empirical "design and test" approach.
A key question when studying systems that can't be controlled via experiments and tests—such as modeling global warming in the Arctic Ocean—is how to properly include the effects of weather and climate outside the calculational region as well as the effects of complicated phenomena such as fresh-water sources. Wieslaw Maslowski and his colleagues assess the sensitivity of Arctic sea-ice coverage to models that "relax" ocean temperature to match measured trends. They found Arctic ice coverage to be more sensitive to spatial resolution and other effects than to ocean temperature relaxation.
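The "relaxation" technique mentioned above is commonly called Newtonian relaxation or nudging: at each time step, the modeled quantity is pulled toward an observed value over a chosen timescale. A minimal sketch, with hypothetical values (the actual Maslowski et al. model configuration is far more elaborate):

```python
# Minimal sketch of Newtonian relaxation ("nudging") of ocean temperature
# toward an observed value. All numbers here are hypothetical, chosen only
# to illustrate the idea; they are not taken from the Arctic model itself.

def relax(temp_model, temp_obs, dt, tau):
    """Nudge the modeled temperature toward the observed one.

    dt  -- time step in seconds
    tau -- relaxation timescale in seconds (larger tau = weaker nudging)
    """
    return temp_model + (dt / tau) * (temp_obs - temp_model)

# Example: a modeled temperature of 2.0 C relaxed toward an observed 4.0 C
# with a daily time step and a 10-day relaxation timescale.
DAY = 86400.0
t = 2.0
for _ in range(10):
    t = relax(t, temp_obs=4.0, dt=DAY, tau=10 * DAY)
# After 10 steps, t has moved most of the way toward 4.0 C.
```

Each step shrinks the model-observation gap by the factor (1 - dt/tau), so the modeled field decays exponentially toward the observations.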
The computational applications described here require large-scale massively parallel machines with thousands of processors; the DoD's High Performance Computing Modernization Program (HPCMP; www.hpcmo.hpc.mil) provided the computers on which these researchers ran their applications. Each year, the program purchases two to four new systems for two of its centers, so that each center gets a new machine every second year. The HPCMP has also developed a new method for guiding the procurement process.
A particular application's performance can vary by as much as a factor of 10 for different computers, as can the performance of different codes for a particular computer. Larry Davis and his colleagues describe a methodology for ensuring that DoD users get the best mix of computers for their needs. Their procurement methodology relies on measuring the relative performance for candidate computer systems by using a suite of approximately 10 applications that represent the types of codes used on DoD computers. Then, they combine each proposed system's price-per-performance data with the performance of currently installed computers to identify the best mix of new machines and old. Finally, they combine this data with physical infrastructure requirements, historical measures of vendor support, reliability, and maintenance to identify the best value.
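The core of such a procurement methodology can be sketched as a scoring computation. The snippet below is an illustrative sketch only, not the HPCMP's actual formulas: it aggregates per-benchmark speedups with a geometric mean (a common convention for benchmark suites) and divides by price to rank candidates. All runtimes and prices are hypothetical.

```python
# Illustrative sketch of benchmark-driven procurement scoring (hypothetical
# numbers; not the HPCMP's actual methodology or data).
from math import prod

def relative_performance(times_ref, times_cand):
    """Geometric mean of per-benchmark speedups vs. a reference system."""
    speedups = [r / c for r, c in zip(times_ref, times_cand)]
    return prod(speedups) ** (1.0 / len(speedups))

def price_performance(times_ref, times_cand, price):
    """Aggregate performance per unit price (higher is better)."""
    return relative_performance(times_ref, times_cand) / price

# Hypothetical runtimes (seconds) on a reference machine and two candidates,
# with prices in millions of dollars.
ref    = [100.0, 200.0, 150.0]
cand_a = [50.0, 120.0, 90.0]   # faster but pricier
cand_b = [80.0, 160.0, 140.0]  # slower but cheaper

score_a = price_performance(ref, cand_a, price=12.0)
score_b = price_performance(ref, cand_b, price=6.0)
best = "A" if score_a > score_b else "B"
```

In this toy example the cheaper, slower candidate wins on price/performance; the full methodology described above additionally folds in infrastructure requirements, vendor support history, reliability, and maintenance before selecting the best-value mix.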
The use of computational science and engineering continues to grow in the DoD as weapons systems become more complex and place greater reliance on "not completely mature" technologies. Such systems require more testing at a time when the department faces intense pressure to reduce testing costs and schedules. This has led the DoD to launch the ambitious 12-year Computational Research and Engineering Acquisition Tools and Environments (CREATE) program this year. CREATE will develop and deploy three large-scale computational engineering toolkits for designing aircraft, ships, and RF antenna integration. The goal is to provide DoD engineers with ways to detect and fix design flaws early in the design process, before major schedule and budget commitments are made. Computational science and engineering is also playing a larger role in optimizing the effectiveness of test programs, both by identifying the parameter ranges in which a system is most vulnerable to failure and by increasing the amount and quality of test data that can be collected and analyzed. The DoD HPCMP holds an annual user group conference in mid-June to present its latest accomplishments (www.hpcmo.hpc.mil). The articles featured in this issue represent a sample of the papers presented at the 2005 conference.