Yolanda Gil, Ewa Deelman, Mark Ellisman, Thomas Fahringer, Geoffrey Fox, Dennis Gannon, Carole Goble, Miron Livny, Luc Moreau, and Jim Myers
Workflows have recently emerged as a paradigm for representing and managing complex distributed scientific computations, accelerating the pace of scientific progress. Scientific workflows orchestrate the flow of data across individual data transformation and analysis steps, as well as the mechanisms for executing them in a distributed environment. Workflows should thus become first-class entities in the cyberinfrastructure architecture.
Each step in a workflow specifies a process or computation to be executed—a software program or Web service, for instance. The workflow links the steps according to the data flow and dependencies among them. The representation of these computational workflows contains many details required to carry out each analysis step, including the use of specific execution and storage resources in distributed environments.
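The dependency structure described above can be sketched as a small directed graph. The following is an illustrative toy workflow (the step names are hypothetical, not from the article), linearized in dependency order with Python's standard-library `graphlib`:

```python
from graphlib import TopologicalSorter

# Hypothetical four-step analysis workflow: each step lists the
# steps whose outputs it consumes (its data dependencies).
workflow = {
    "extract":   [],                      # fetch raw data
    "clean":     ["extract"],             # filter and normalize
    "transform": ["clean"],              # derive features
    "analyze":   ["clean", "transform"],  # final analysis step
}

# A workflow engine runs each step only after the steps it
# depends on have produced their data.
order = list(TopologicalSorter(workflow).static_order())
print(order)  # ['extract', 'clean', 'transform', 'analyze']
```

A real workflow system would additionally map each step onto specific execution and storage resources; the topological ordering above captures only the data-flow constraint.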
Luiz André Barroso and Urs Hölzle
Energy management has now become a key issue for servers and data center operations, focusing on the reduction of all energy-related costs, including capital, operating expenses, and environmental impacts. Many energy-saving techniques developed for mobile devices became natural candidates for tackling this new problem space. Although servers clearly provide many parallels to the mobile space, they require additional energy-efficiency innovations. Energy-proportional computers would enable such savings, potentially doubling the efficiency of a typical server.
In current servers, the lowest energy-efficiency region corresponds to their most common operating mode. Addressing this mismatch will require significant rethinking of components and systems. To that end, energy proportionality should become a primary design goal. Although researchers' experience in the server space motivates these observations, energy-proportional computing also will significantly benefit other types of computing devices.
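The mismatch can be made concrete with a simple linear power model. The numbers below are illustrative assumptions (a server idling at half its peak draw, running at 30 percent utilization), not measurements from the article:

```python
# Sketch of why energy proportionality matters.
def power_watts(utilization, idle_w, peak_w):
    """Linear power model: idle draw plus a load-dependent term."""
    return idle_w + (peak_w - idle_w) * utilization

# A conventional server often idles near half its peak power.
conventional = power_watts(0.3, idle_w=250, peak_w=500)  # 325.0 W
# An energy-proportional server draws power in proportion to load.
proportional = power_watts(0.3, idle_w=0, peak_w=500)    # 150.0 W

# Normalized efficiency: useful work delivered per watt drawn.
for name, p in [("conventional", conventional),
                ("proportional", proportional)]:
    print(f"{name}: {p:.0f} W, relative efficiency {0.3 * 500 / p:.2f}")
```

At 30 percent load the proportional server delivers the same work for less than half the power, which is the rough doubling of efficiency the abstract refers to.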
Suzanne Rivoire, Mehul A. Shah, Parthasarathy Ranganathan, Christos Kozyrakis, and Justin Meza
In recent years, server and data center power consumption has become a major concern, directly affecting a data center's electricity costs and requiring the purchase and operation of cooling equipment, which can consume from one-half to one watt for every watt of server power consumption.
All these power-related costs can potentially exceed the cost of purchasing hardware. Moreover, the environmental impact of data center power consumption is receiving increasing attention, as is the effect of escalating power densities on the ability to pack machines into a data center.
The two major and complementary ways to approach this problem involve building energy efficiency into the initial design of components and systems, and adaptively managing the power consumption of systems or groups of systems in response to changing conditions in the workload or environment.
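The scale of the cooling overhead cited above is easy to estimate. This sketch assumes an illustrative 1 MW server load and electricity price; only the 0.5 to 1.0 W-per-watt cooling ratio comes from the text:

```python
# Rough yearly cost model: every watt of server power requires an
# extra 0.5-1.0 W of cooling power. The $0.10/kWh price is an
# illustrative assumption, not a figure from the article.
def annual_energy_cost(server_kw, cooling_ratio, usd_per_kwh=0.10):
    """Yearly electricity cost for servers plus their cooling load."""
    total_kw = server_kw * (1 + cooling_ratio)
    return total_kw * 24 * 365 * usd_per_kwh

low = annual_energy_cost(1000, cooling_ratio=0.5)   # 1 MW of servers
high = annual_energy_cost(1000, cooling_ratio=1.0)
print(f"${low:,.0f} - ${high:,.0f} per year")
```

Even at these modest assumptions the cooling overhead alone adds hundreds of thousands of dollars per megawatt per year, consistent with the claim that power-related costs can rival hardware purchase costs.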
Wu-chun Feng and Kirk W. Cameron
Despite a 10,000-fold increase since 1992 in supercomputers' performance when running parallel scientific applications, performance per watt has improved only 300-fold and performance per square foot only 65-fold. This has forced researchers to design and construct new machine rooms and, in some cases, entirely new buildings. Compute nodes' exponentially increasing power requirements are a primary driver behind this less efficient use of power and space.
Today, several of the most powerful supercomputers on the TOP500 List each require up to 10 megawatts of peak power—enough to sustain a city of 40,000. To inspire more efficient conservation efforts, the HPC community needs a Green500 List to rank supercomputers on speed and power requirements and to supplement the TOP500 List.
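The figures above imply how sharply supercomputer power draw has grown, and they suggest what a Green500-style ranking would measure. The two machines below are hypothetical examples, not actual TOP500 entries:

```python
# A 10,000x performance gain with only a 300x performance-per-watt
# gain means total power consumption rose by their ratio.
power_growth = 10_000 / 300
print(f"Power draw grew roughly {power_growth:.0f}x since 1992")

# A Green500-style list ranks machines by efficiency rather than
# raw speed (illustrative systems and numbers).
machines = {
    "system_a": {"tflops": 280.0, "megawatts": 10.0},  # fast, hungry
    "system_b": {"tflops": 100.0, "megawatts": 2.0},   # slower, greener
}
ranked = sorted(
    machines,
    key=lambda m: machines[m]["tflops"] / machines[m]["megawatts"],
    reverse=True,
)
print(ranked)  # system_b ranks first despite lower raw speed
```

Ranking by performance per watt inverts the order a pure speed ranking would give, which is exactly the complementary view the authors argue the TOP500 List needs.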
John Y. Oliver, Rajeevan Amirtharajah, Venkatesh Akella, Roland Geyer, and Frederic T. Chong
Many consumer electronic devices, from computers to set-top boxes to cell phones, require sophisticated semiconductors such as CPUs and memory chips. The economic and environmental costs of producing these processors for new and continually upgraded devices are enormous. Because the semiconductor manufacturing process uses highly purified silicon, the energy required is quite high—about 41 megajoules (MJ) for a 1.2 cm² dynamic random access memory (DRAM) die. In terms of environmental impact, 72 grams of toxic chemicals are used to create such a die.
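For scale, the fabrication-energy figure above can be restated in more familiar units; only the 41 MJ and 1.2 cm² values come from the text:

```python
# Convert the cited per-die fabrication energy into kWh and into
# energy per unit of silicon area.
die_energy_mj = 41        # MJ per DRAM die (from the article)
die_area_cm2 = 1.2        # cm^2 per die (from the article)

kwh_per_die = die_energy_mj / 3.6          # 1 kWh = 3.6 MJ
mj_per_cm2 = die_energy_mj / die_area_cm2

print(f"{kwh_per_die:.1f} kWh per die, {mj_per_cm2:.0f} MJ per cm^2")
```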
Processor reuse can help deal with these increasingly severe economic and environmental costs, but it will require innovative techniques in reconfigurable computing and hardware-software codesign as well as governmental policies that encourage silicon reuse.