I recently read Thomas L. Friedman's book, The World Is Flat: A Brief History of the Twenty-first Century (Farrar, Straus and Giroux, 2005). Focused on economics, the book also includes interesting observations about the information technology revolution.
Friedman observes, for example, that data communications on portable and handheld devices will mark the beginning of the IT revolution. The foundation of this revolution, in addition to Internet and wireless communications, will be portable and handheld computing devices that will replace desktops, laptops, and cell phones, ultimately morphing into something entirely new. At the heart of these devices, single-chip computers will provide and integrate a diverse set of applications that use entirely new architectural, design, modeling, simulation, and evaluation techniques.
If these are to be considered embedded systems, we must find a definition for embedded other than "something a computer architect is not interested in."
Many people still think of embedded systems as computers interacting with noncomputer, physical systems such as automobiles and power plants. Yet the next-generation computers at the foundation of portable and handheld computing are no more embedded systems than they are single-threaded systems, and they are certainly not servers or other conventional parallel processors. We must determine the impetus and audience for researching these next-generation computers.
Friedman's book points out that developing nations, with no previous infrastructure, are often more adept at adopting newer technologies than nations encumbered by such an infrastructure. Research also has this characteristic.
I have been trying to find a way to work on the fundamental problems of next-generation computer design for the past several years—problems that challenge the conventional wisdom of the CAD community, with its deep roots in formal modeling and synthesis, and the computer architecture community, which seems willing to sacrifice anything but existing programming models. Identifying an interesting problem to work on, it turns out, can be more of a curse than a blessing.
I can best explain this by including some personal observations. My research community has been defined as "System-Level CAD," "Embedded Systems," and "Codesign," with seemingly no end to the new monikers that embrace all and convey little. Revisiting the name game in this community provides a good starting point for considering other problems.
A research community's signature conference defines it, for both practical and idealistic reasons. The practical reasons pertain to the reward structure for doing research. For both academics and those who still do research in industry, dollars, publications, and reputation measure productivity.
It isn't possible to evaluate good ideas, good works, and good people just by reading proposals or journal papers. At a good conference, coffee breaks—which result in informal conversations—can be as important as the presentation rooms. This is where researchers have an opportunity to distill and clarify their ideas—for themselves and for their audience. That audience could be a potential industrial partner, a future paper reviewer, or someone who might write a letter supporting anything from a promotion to a career-long award.
In a more idealistic sense, conferences offer the only forum that provides a shared identity and a true peer group experience for an otherwise distributed group of people who have a common, logical research focus. As with other kinds of peer groups, research groups provide a personal reason to continue doing good works long after a researcher has established a reputation: People do such works because they gain satisfaction from knowing that their peers appreciate their contribution.
CODES+ISSS is the conference that I, as a researcher, identify with most closely. The 2005 conference provided another opportunity for my research community to establish a new name for the work it does. This is a source of intrigue, because the conference continues growing in popularity even though its name derives from two research areas that no longer describe the work it embodies:
• CODES was the name of the International Conference on Hardware/Software Codesign, while
• ISSS was the name of the International Symposium on System Synthesis.
The older of the two, ISSS, was established as a place to publish work on behavioral—or system-level—synthesis. In those days, system synthesis meant computer hardware synthesis—synthesizing an application-specific integrated circuit from hardware description languages such as Verilog or VHDL.
ASICs offer performance advantages at the expense of post-design-time programmability. This contrasts with computer architecture, which focuses on support for a wide variety of end-use programming scenarios.
Hardware/software codesign grew out of system-level synthesis. When processors, and thus software, started to appear on single-chip computers that were previously designed only in hardware, a question emerged in the synthesis community: What should go in hardware and what in software? CODES addressed this partitioning problem for several years.
Currently, neither hardware synthesis nor hardware/software codesign describes the system-level design represented at CODES+ISSS. Most design problems in CODES+ISSS assume the need for more than one piece of hardware, and typically even more than one kind of processor.
Still, when the new CODES+ISSS conference formed, the founders could not agree on a new name. The words system and embedded kept appearing in various proposals. These are the least offensive words precisely because they say so little.
Without agreement, the old name, CODES+ISSS, has remained. The name is more a moniker about people and their legacy than about research objectives and the future. During the 2005 conference, some speakers introduced a new name, Electronic System Level (ESL), which suggests research that requires no particular kind of expertise or skill.
Unfortunately, the implications of pursuing a research area that lacks a descriptive name are huge:
• Students apply for graduate positions to work in named areas.
• Faculty advertisements are often restricted to hiring those who work in named areas.
• Funding agencies, such as the National Science Foundation and the Semiconductor Research Corporation, solicit proposals for research done in named areas.
• In academia, contributions are evaluated by peers—researchers who pursue common interests. If the common interests cannot be named, contributions become more difficult to evaluate.
• If its objectives cannot be named, the research community has difficulty attracting new people interested in solving new, fresh, and exciting problems.
If I tell those outside the CODES+ISSS community that my research is in ESL design, they have no idea what I do. By contrast, those who work in networking, reconfigurable computing, computer architecture, computer security, or even artificial intelligence can quickly establish a first-order approximation of the work they do in one or two words. With these descriptive names, they categorize themselves by the area of expertise they bring to the table.
This experience forced me to consider why the CODES+ISSS community is simultaneously so interesting and frustrating.
Again, this can be seen more clearly by considering what's in a name. Consider the heavy reliance within the CODES+ISSS community on the use of the word system.
System design tends to bring to mind the pulling together of diverse, interacting elements to create something entirely new. The need for what the system does drives the process of creating it. In the case of computer systems, the need is the application. A system is defined more by what it does than by what it is.
For this reason, those working in ESL design always search for the next hot application, be it network processing, multimedia, or sensor arrays. Notice the contrast to computer architecture, where existing benchmark suites define the evaluation space.
Computer system design is application-driven, and therein lies a problem. World-class researchers will more likely earn their reputation by focusing on what something is and generalizing what it can and cannot be. Those who choose to define themselves by pursuing an ever-expanding collection of interesting design situations are more likely to become known for being high-tech handymen: engineers in search of solutions rather than researchers in search of truths.
At the same time, however, being motivated by applications presents an opportunity. Communities flexible enough to chase the next hot application are well-poised to occupy the leading edge.
How then does a researcher interested in developing the fundamental work necessary for next-generation IT devices gain a reputation as a world-class researcher?
In contrast to CODES+ISSS, other research communities focus on investigating and extending a core set of techniques and organizing principles. These researchers have a specific expertise in a specific discipline, and they bring that expertise to bear on a core set of problems. At the same time, they seek to provide the foundation for generating something new—and that something new will in turn enable new applications to execute more quickly, occupy a smaller space, function more robustly, or provide greater scalability.
Given the challenges that next-generation single-chip computing poses, our field is not unlike computer architecture in the days before formal design resulted in the definition of finite-state and Turing machines, before Amdahl's law, and well before the establishment of a common benchmark set and simulation platform. Back then, researchers seemed to recognize their need for a common basis for discussion more clearly than we do on the IT revolution's frontier. Back then, there was also much less legacy work to build upon.
Somewhere between the inertia built up in computer architecture and the lack of focus in the CODES+ISSS or ESL community lie both the need and the opportunity to provide fundamental research for a new class of computing problems. A fundamental set of interesting research problems is out there, but there is no real forum either for getting the attention of research sponsors to define those problems or for young researchers to gain acceptance for working on them.
I believe the best opportunity exists within the CODES+ISSS community, if that community will trade some of its flexibility for focus.
Virtually all the work represented in CODES+ISSS is either about systems that use multiple, heterogeneous design cores, or about the design of elements to be used in those systems.
I propose a new research community defined around the design of heterogeneous core computers. These HCCs will emphasize the creation and integration of multiple design cores and the programmable nature of the resulting computer. The result will be a community focused on integration as well as architecture, design as well as programming, and evaluation techniques as well as design tools.
Heterogeneity distinguishes portable and handheld computing from other single-chip multicore communities. It arises naturally from the need to place a set of applications upon a finite amount of space such as a chip. Given that the chip has finite real estate, homogeneity is wasteful. Further, we know enough about the applications at design time to favor differentiating portions of the chip at a fairly high level, well above the register level.
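The area argument can be made concrete with a back-of-the-envelope model. The sketch below is my own illustration with invented numbers, not anything from the column: under a fixed area budget, and with the application mix known at design time, area spent on cores specialized to that mix delivers more weighted throughput than the same area spent homogeneously.

```python
# Toy illustration with invented numbers: specialization pays off when
# the workload is known at design time and chip area is finite.

AREA_BUDGET = 8  # abstract area units available on the chip

# Hypothetical workload: fraction of work in each task class.
workload = {"signal": 0.5, "control": 0.3, "graphics": 0.2}

# Throughput per area unit: a general-purpose core is mediocre at every
# task class, while a specialized core excels at its own class.
GENERAL = {"signal": 1.0, "control": 1.0, "graphics": 1.0}
SPECIAL = {"signal": 4.0, "control": 2.0, "graphics": 3.0}

def weighted_throughput(alloc):
    """alloc maps a task class to area devoted to a core specialized for
    that class; the key 'general' denotes homogeneous, general-purpose area.
    Assumes each task class runs only on matching (or general) hardware."""
    total = 0.0
    for task, frac in workload.items():
        rate = (alloc.get(task, 0) * SPECIAL[task]
                + alloc.get("general", 0) * GENERAL[task])
        total += frac * rate
    return total

homogeneous = {"general": AREA_BUDGET}
heterogeneous = {"signal": 4, "control": 2, "graphics": 2}  # sized to the mix

print(weighted_throughput(homogeneous))
print(weighted_throughput(heterogeneous))
```

The heterogeneous layout wins in this toy precisely because the workload fractions are known up front; if the application mix were unknown, the homogeneous layout's flexibility would be the safer bet.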
With this or even some other focus, our community can attract the best and the brightest. Who among them will be first to gain recognition for tackling the following fundamental questions:
• What are the design elements of HCCs?
• What is the HCC programming model?
• How many kinds of heterogeneous cores do we really need?
• How sensitive is overall HCC performance to the performance of any single core?
• How do we schedule across multiple, heterogeneous cores?
• When is having the same task execute on multiple cores worthwhile?
• When and how is global coordination useful and when is it wasteful?
• Many HCCs will be multimodal; how does utilization factor into the benefits of specialization?
• When does using multiple cores and incurring their coordination cost work better than using a single, faster core that multiplexes many tasks?
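One way to make the scheduling question, and the final bullet's single-core comparison, concrete is a toy experiment. The sketch below is my own illustration with invented per-core costs, not anything from the column: a greedy scheduler assigns each independent task to whichever heterogeneous core would finish it earliest, and the resulting makespan can be compared against multiplexing every task on one core alone.

```python
# Toy sketch with invented costs -- an illustration of the
# heterogeneous-scheduling question, not a real HCC model.

# cost[core][task_class]: time for that core to run one task of that class.
cost = {
    "dsp": {"filter": 1.0, "branchy": 6.0},  # fast at signal processing
    "cpu": {"filter": 3.0, "branchy": 2.0},  # fast at control-heavy code
}

def greedy_schedule(tasks):
    """Assign each task to the core that would finish it earliest.
    Returns (makespan, list of (task, core) assignments)."""
    load = {core: 0.0 for core in cost}  # accumulated time per core
    assignment = []
    for task in tasks:
        best = min(cost, key=lambda core: load[core] + cost[core][task])
        load[best] += cost[best][task]
        assignment.append((task, best))
    return max(load.values()), assignment

tasks = ["filter"] * 4 + ["branchy"] * 4
makespan, assignment = greedy_schedule(tasks)
print(makespan)  # both cores working together
for core in cost:  # versus multiplexing every task on one core alone
    print(core, sum(cost[core][task] for task in tasks))
```

Even this toy exposes the questions above: the answer depends on the task mix, on how large the per-core speed differences are, and on coordination costs, which the sketch optimistically treats as free.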
These questions should be posed, pursued, and answered as part of a body of work a clearly identifiable research community performs, where peers can reward and evaluate fundamental works. Thus, we can take a big step toward providing a set of enabling solutions for the IT revolution by first redefining our community.
JoAnn M. Paul
is an associate professor in the Electrical & Computer Engineering Department at Virginia Tech. Contact her at email@example.com.