Paul S. Rosenbloom
Traditionally, computing studies occupy two partitions—science and engineering—separated by a line roughly at the computer architecture level. A more effective organization for computer science and engineering requires an intrinsically interdisciplinary framework that combines academic and systems-oriented computing perspectives.
Researchers at the University of Southern California have been developing such a framework, which reaggregates computer science and computer engineering, then repartitions the resulting single field into analysis and synthesis components. The framework rests on the notion that science is primarily about dissecting and understanding, while engineering is primarily about envisioning and building.
The job prospects of college graduates with degrees in computing and information technology have recently dimmed significantly. Much of the current situation can be attributed to the dot-com bust, the generally weak US economy, and a growing trend toward offshore outsourcing of IT-related jobs.
The author describes an interdisciplinary program at Louisiana Tech University that merged the College of Engineering with the College of Science to form the College of Engineering and Science, established an interdisciplinary PhD program in computational analysis and modeling, and created a small number of interdisciplinary research centers. He explains how such a program can benefit smaller schools, where graduates face limited job opportunities and where pooling the talents of interdisciplinary teams can help them compete for national funding in focused research areas.
Mahesh Kallahalla, Mustafa Uysal, Ram Swaminathan, David E. Lowell, Mike Wray, Tom Christian, Nigel Edwards, Chris I. Dalton, and Frederic Gittler
Utility computing aggregates disparate systems into a single, centrally managed pool of resources that offers unified control, freedom from physical configuration, resource sharing, and resource isolation.
To provide these features, the authors propose SoftUDC, a software-based utility data center built on careful virtualization of servers, networking, and storage. With SoftUDC, administrators can deploy applications and modify their environments without physically rewiring servers, which facilitates sharing of physical resources while maintaining full isolation.
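The pooling idea at the heart of such a design can be illustrated with a toy sketch (hypothetical names, not SoftUDC's actual API): virtual servers are placed on physical hosts drawn from a shared pool, so applications can be deployed or moved without touching hardware.

```python
# Toy sketch of resource pooling in a virtualized data center.
# Names and capacity units are hypothetical, for illustration only.

class Host:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity  # free capacity units on this host
        self.vms = []

class Pool:
    def __init__(self, hosts):
        self.hosts = hosts

    def place(self, vm_name, demand):
        """Place a virtual server on any host with spare capacity."""
        for host in self.hosts:
            if host.capacity >= demand:
                host.capacity -= demand
                host.vms.append(vm_name)
                return host.name
        raise RuntimeError("pool exhausted")

pool = Pool([Host("h1", 4), Host("h2", 8)])
assert pool.place("web", 4) == "h1"
assert pool.place("db", 6) == "h2"  # h1 is full, so placement moves on
```

Isolation in a real system would come from the hypervisor; here the point is only that placement decisions are made against a logical pool rather than wired-in physical assignments.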
Greg Regnier, Srihari Makineni, Ramesh Illikkal, Ravi Iyer, Dave Minturn, Ram Huggahalli, Don Newell, Linda Cline, and Annie Foong
While research in TCP/IP processing has been under way for several decades, the increasing networking needs of server workloads and evolving server architectures point to the need to explore TCP/IP acceleration opportunities. Researchers at Intel Labs are experimenting with mechanisms that address system and memory stall time overheads. They are also studying the effects of interrupt and connection level affinity on TCP/IP processing performance.
Further, they have begun exploring mechanisms to support latency-critical TCP/IP usage models such as storage over IP and clustered systems. The goal is to identify the right level of hardware support for communication on future CMP processors and server platforms.
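Connection-level affinity, one of the effects under study, can be sketched in miniature (this is a generic illustration, not Intel's mechanism): hashing a TCP connection's 4-tuple steers all of its packets to the same core, keeping per-connection state warm in that core's cache.

```python
# Generic sketch of connection-level affinity (not Intel's design):
# hash the TCP 4-tuple so every packet of a connection is processed
# on the same core, avoiding cross-core cache misses on its state.
import zlib

NUM_CORES = 4  # hypothetical core count

def core_for_connection(src_ip, src_port, dst_ip, dst_port):
    """Pick a consistent core for this connection via a 4-tuple hash."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_CORES

# Packets from the same connection always map to the same core.
a = core_for_connection("10.0.0.1", 12345, "10.0.0.2", 80)
b = core_for_connection("10.0.0.1", 12345, "10.0.0.2", 80)
assert a == b
```

Hardware receive-side scaling applies the same idea in the NIC, computing the hash before the packet ever reaches software.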
George Candea, Aaron B. Brown, Armando Fox, and David Patterson
The Recovery-Oriented Computing (ROC) project studied techniques to help systems recover quickly from inevitable failures. ROC research focused mainly on Internet services because they can grow to immense proportions, are subject to perpetual evolution, have varying workloads, and are expected to run 24/7.
The project has implemented two building blocks for recovery: microreboot and system-level undo. These researchers believe that most of what we have learned from Internet services can also be applied to desktops, smaller network services, and other computing environments.
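The microreboot idea can be sketched in a few lines (a simplification, not the ROC project's implementation): when one component fails, restart only that component rather than the whole service, so recovery time stays small and healthy components keep running.

```python
# Simplified sketch of microreboot (not the ROC project's code):
# recover a service by restarting only its failed components.

class Component:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def microreboot(self):
        # Reinitialize only this component's state.
        self.healthy = True

class Service:
    def __init__(self, names):
        self.components = {n: Component(n) for n in names}

    def recover(self):
        """Reboot only failed components; healthy ones keep running."""
        failed = [c.name for c in self.components.values() if not c.healthy]
        for name in failed:
            self.components[name].microreboot()
        return failed

svc = Service(["frontend", "cart", "checkout"])
svc.components["cart"].healthy = False
assert svc.recover() == ["cart"]  # only the failed component restarts
assert all(c.healthy for c in svc.components.values())
```

The design bet is that a component restart is orders of magnitude cheaper than a full process or machine reboot, which is why microreboot pairs naturally with fine-grained fault detection.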
Ricardo Bianchini and Ram Rajamony
Data centers typically host clusters of hundreds and sometimes thousands of servers. Power and energy consumption have thus become key concerns in data centers.
Recognizing the differences between portable and server-class workloads and operating environments, researchers have developed server-specific management strategies. Despite these efforts, much work remains. Beyond conserving power and energy in heterogeneous server clusters that combine traditional and blade servers, the authors and their colleagues also seek to enforce limits on the power each server consumes.
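Enforcing a per-server power limit is typically a feedback loop; the sketch below illustrates the general idea (not the authors' algorithm, and the wattage table is hypothetical): when measured power exceeds the budget, step the CPU down to a lower performance state, and step back up when there is headroom.

```python
# Illustrative feedback loop for per-server power capping.
# The per-state wattages are made-up numbers, not measured data.

POWER_PER_STATE = [60, 80, 100, 120]  # hypothetical watts, low -> high p-state

def adjust_pstate(state, measured_watts, budget_watts):
    """Return the next performance state given the power budget."""
    if measured_watts > budget_watts and state > 0:
        return state - 1  # over budget: throttle down to shed power
    if state < len(POWER_PER_STATE) - 1 and \
       POWER_PER_STATE[state + 1] <= budget_watts:
        return state + 1  # headroom: restore performance
    return state

assert adjust_pstate(3, 120, 100) == 2  # over budget: throttle
assert adjust_pstate(1, 80, 110) == 2   # headroom: speed up
assert adjust_pstate(0, 70, 60) == 0    # already at the lowest state
```

A real controller would smooth measurements and coordinate caps across the cluster so the aggregate budget is met without starving any one server.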