IEEE Micro, vol. 27, no. 3, May/June 2007, pp. 4-5
Published by the IEEE Computer Society
David H. Albonesi, Cornell University
ABSTRACT
While leading computing corporations have instituted "green data center" and "eco-responsible computing" initiatives, the computer architecture community as a whole has drifted away from power-aware architecture and on to the next topic. Arguably, power remains the computer architecture topic with the most potential for societal impact. Albonesi exhorts Micro readers to re-emphasize power-related research and outlines a few of the most pressing issues.
In my last column, I reflected on the decline in power-aware computer architecture papers in two of the premier conferences: the International Symposium on Computer Architecture (ISCA) and the International Symposium on Microarchitecture (MICRO). Does the move to multicore microprocessors composed of simpler cores of modest frequency mean that power has become a dead topic? I posed this admittedly loaded question to a number of leading industry architects—Pradip Bose of IBM, Luiz Barroso of Google, Justin Rattner of Intel, and Chuck Moore of AMD—and the answer was a resounding "No." Power is just as important today as it was five years ago.
But the computing landscape and power constraints have changed. The processor core and caches are in better shape power-wise than on- and off-chip interconnects, memories, disks, and networking hardware. Indeed, the lion's share of the last decade's research effort in power-aware computing involved the processor core and caches, and today's systems clearly demonstrate the results. Google's Barroso notes that in a typical server, processor power fluctuates with load thanks to power management techniques, but memories, disks, and network hardware don't vary nearly as much, drawing a large percentage of their peak power even when they are not heavily utilized.
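To make Barroso's observation concrete, here is a minimal back-of-the-envelope sketch in Python. The component wattages are purely illustrative assumptions on my part, not measurements from any real server; the point is simply that when memory, disks, networking, and fans barely scale with load, the whole system draws a large fraction of its peak power even when it is nearly idle.

    # Illustrative sketch: system power versus utilization when only the CPU
    # power-manages well. Every wattage below is an assumption for illustration.
    def system_power(util):
        """Estimate total server power (watts) at a utilization in [0, 1]."""
        cpu     = 15 + (80 - 15) * util   # CPU scales with load (15 W idle, 80 W peak)
        memory  = 35 + (45 - 35) * util   # DRAM: mostly static refresh/background power
        disk    = 10 + (12 - 10) * util   # disks spin regardless of load
        network = 5                       # NICs and switch ports: essentially flat
        other   = 50                      # fans, power-supply losses, motherboard
        return cpu + memory + disk + network + other

    peak = system_power(1.0)
    for util in (0.0, 0.1, 0.3, 1.0):
        p = system_power(util)
        print(f"utilization {util:4.0%}: {p:5.1f} W ({p / peak:4.0%} of peak)")
    # Even at 10 percent utilization, this hypothetical server draws nearly
    # two-thirds of its peak power, because the non-CPU components hardly scale down.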
So even if our collective enthusiasm for power-aware computing research seems to have waned, our job is very much unfinished. Our best efforts to date have been easily exceeded by the rapid pace of information dissemination to the world via the Internet. A report this year by Lawrence Berkeley National Laboratory ( http://enterprise.amd.com/Downloads/svrpwrusecompletefinal.pdf), which attempts to more carefully quantify worldwide electricity use associated with servers, estimates that this usage doubled from 2000 to 2005 and will continue to rise rapidly. Although server electricity usage (including cooling and auxiliary equipment) constituted slightly less than 1 percent of total worldwide electricity sales in 2005, that still amounts to the output of fourteen 1,000-megawatt power plants. And don't forget the massive client side of the power problem: the desktops, laptops, and PDAs playing those downloaded videos, songs, and games (and used for productive purposes, of course). How many of you reading this article now have several computers, just in your home? How many information appliances get recharged in your household every night?
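As a quick sanity check on that power-plant figure, using round numbers I am assuming here rather than values taken from the report: fourteen 1,000-megawatt plants running continuously for a year generate on the order of 120 terawatt-hours, and worldwide electricity sales in 2005 were roughly 15,000 terawatt-hours, so the ratio does indeed land just under 1 percent.

    # Sanity check of the "14 power plants" equivalence. The 15,000 TWh figure
    # for 2005 worldwide electricity sales is an assumed round number.
    plants = 14
    plant_output_mw = 1_000
    hours_per_year = 8_760
    server_twh = plants * plant_output_mw * hours_per_year / 1e6  # MWh -> TWh
    world_sales_twh = 15_000
    print(f"Server electricity (incl. cooling): ~{server_twh:.0f} TWh/year")
    print(f"Share of worldwide sales: ~{server_twh / world_sales_twh:.1%}")
    # ~123 TWh/year, about 0.8 percent of sales -- "slightly less than 1 percent".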
While leading computing corporations recognize the power problem and have instituted "green data center" and "eco-responsible computing" initiatives (for an example, point your browser to http://climatesaverscomputing.org), I still believe that the computer architecture community as a whole, myself included, has drifted away from power and on to the next topic. Power seems to have lost much of its appeal as a research topic, yet we are in a catch-up situation with respect to the amount of energy usage that our industry's success is creating. Arguably, power remains the computer architecture topic with the most potential for societal impact, yet it now seems passé.
Arguments of environmental burden aside, the aforementioned industry architects note that the power problem still keeps them awake at night simply in terms of shipping competitive products and running profitable businesses. Here are a few of the issues that they find most pressing:

    System-level power. As already mentioned, it's no longer the processor core and caches, it's the entire system—including interconnects, memory, disks, and networking hardware—that we need to worry about.

    Control stability and determinism. How do we prove the stability of our power-control techniques and verify that they provide a guarantee under all possible conditions? Many industry architects will shy away from complex control systems that might, just once, fail miserably. (The sketch following this list illustrates the kind of closed-loop controller in question.)

    di/dt noise. Noise margins are thinner than ever, and power-control techniques exacerbate simultaneous switching noise. The excellent work done thus far in this area is overshadowed by the extent of the problem that remains.

    Tools and analysis. Despite many notable efforts, there is an ongoing need for improving the efficacy of architecture-level power-modeling and analysis tools.

    Optimizing for more varied workloads. For instance, many important workloads include a significant system-level software component. Several academic groups have invested considerable effort in developing full-system performance and power-modeling environments and have graciously made their tools widely available to the community.
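
On the control-stability point above, the following toy Python sketch shows the kind of closed-loop mechanism in question: a proportional controller that scales frequency (and, implicitly, voltage) to hold measured power under a cap. All constants are made-up illustrations, not values from any shipping design; the relevant observation is that even this simple loop can oscillate or overshoot if its gain is chosen badly, and bounding that worst-case behavior under all conditions is exactly what architects must do before trusting such a mechanism.

    # Toy power capping via DVFS: a proportional controller nudges the frequency
    # scaling factor so that measured power tracks a power cap. All constants are
    # illustrative assumptions.
    POWER_CAP_W = 90.0        # target power budget
    GAIN = 0.005              # proportional gain; too large -> oscillation/overshoot
    F_MIN, F_MAX = 0.5, 1.0   # allowed frequency-scaling range

    def measured_power(freq_scale, load):
        """Crude power model: dynamic power ~ f^3 (V scaled with f), plus leakage."""
        return 100.0 * load * freq_scale ** 3 + 20.0

    def control_step(freq_scale, load):
        """One control interval: measure power, correct frequency toward the cap."""
        error = measured_power(freq_scale, load) - POWER_CAP_W
        freq_scale -= GAIN * error                   # proportional correction
        return min(F_MAX, max(F_MIN, freq_scale))    # clamp to the legal range

    # Simulate a load spike and watch the controller settle (or misbehave if GAIN is raised).
    freq = 1.0
    for t in range(20):
        load = 0.4 if t < 5 else 1.0                 # utilization jumps at t = 5
        freq = control_step(freq, load)
        print(f"t={t:2d} load={load:.1f} f={freq:.3f} "
              f"power={measured_power(freq, load):6.1f} W")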

I, for one, have rethought my own de-emphasis of power-related research within my group over the past few years. I'm sending out power-related grant proposals and trying to publish some more power papers this coming year. I challenge the computer architecture research community to return to the days when ISCA and MICRO ran multiple sessions on power. Although the emphasis of those power papers should be different from what it was a few years ago, the need for innovation in power-aware computing is clearer than ever.
Now a brief introduction to this issue of Micro: The idea of presenting tutorial articles originated with prior Micro Editor in Chief Pradip Bose, and I think you'll agree that every Micro reader can gain valuable knowledge from the articles we've chosen.
The first tutorial article, by Harris and his colleagues, covers transactional memory, a hotly debated approach that promises to increase programmer productivity while maintaining high performance in large-scale multicore systems.
Next, we have an article by Loh, Xie, and Black that breaks down 3D die-stacking technology, a packaging approach being developed by all major microprocessor manufacturers, and discusses the implications for the computer architect.
As I noted earlier, power modeling remains a very important topic, and the following tutorial by Brooks and colleagues provides a thorough treatment of the subject.
The next two tutorials cover modern computer architecture analysis methods. Hoste and Eeckhout in their article discuss microarchitecture-independent workload characterization, an important technique for both design-time and runtime optimization. The subsequent article by Lee and Brooks describes particular statistical methods for improving simulation speed.
Finally, we close out the issue with a tutorial by GadelRab that describes 10-gigabit Ethernet technology and its challenges.
You'll learn a lot from these informative articles, so I invite you to start reading. I always welcome your feedback at albonesi@csl.cornell.edu.