Issue no. 4, July/August 2004 (vol. 24), pp. 5-6
Published by the IEEE Computer Society
How important is the processor core these days? This is a question on the minds of microarchitects, designers, and researchers in our field. And it is a question (or is it a concern?) that is not that new, either. It is at least as old as the days when we started grappling with issues such as the memory wall problem. Processor-level architects have long used (or gotten used to) the admonition, "It's the memory, stupid!"
High-performance, server-class microprocessors have witnessed a steady decline in the percentage of die area devoted to the actual CPU. Most of the transistor growth afforded by each technology generation has gone into boosting on-chip storage in the form of caches. Let's face it, the processor core in a general-purpose, high-performance chip is fast shrinking to apparent insignificance. Even with the new trend of multicore microarchitectures, the noncore area devoted to the storage and interconnect needed to meet chip- and system-level performance targets far dominates the overall chip area. In a sense, this is old news in the world of embedded systems, where the system-on-chip (SoC) design paradigm has already made the processor core a crucial but often rather tiny piece of the overall chip or system.
So, do server-class, high-end processor chip designs have something to learn from current-generation embedded core and system designs? Are some aspects of the problems and corresponding solution approaches converging in these two worlds? Do the economics of chip manufacturing dictate a trend where "commodity" core designs might soon be used in SoC-style designs in both low-end embedded and high-end server systems?
It is difficult to answer these questions without a thorough analysis of cost-performance trends involving parameters as diverse as power consumption, performance, yield, verification cost, testability, and manufacturability, coupled with available knowledge about the trends in semiconductor technology. However, one thing is clear: Embedded-processor microarchitectures had to invest early in power-saving features because of energy cost constraints (in other words, battery life). Features such as clock gating, banked caches with gateable regions, cache set prediction, code compression to save area, dynamic voltage scaling, and static sleep (power-gated) modes are all old hat in the world of embedded-processor systems.
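To see why a feature such as dynamic voltage scaling earned its keep so early in the embedded world, it helps to recall the classic CMOS dynamic-power relation. The following is a minimal back-of-the-envelope sketch in C; the operating points (1.2 V at 2 GHz versus 0.9 V at 1 GHz) and capacitance are hypothetical values chosen purely for illustration, not figures from any particular product:

```c
#include <stdio.h>

/* Classic CMOS dynamic-power model: P = a * C * V^2 * f, where a is
 * the activity factor, C the switched capacitance, V the supply
 * voltage, and f the clock frequency. Because the attainable f falls
 * roughly with V, scaling both together yields roughly cubic savings. */
static double dynamic_power(double a, double c, double v, double f)
{
    return a * c * v * v * f;
}

int main(void)
{
    /* Hypothetical operating points, for illustration only. */
    double a = 0.5, c = 1e-9;  /* activity factor, capacitance (F) */
    double p_full = dynamic_power(a, c, 1.2, 2.0e9); /* 1.2 V, 2 GHz */
    double p_half = dynamic_power(a, c, 0.9, 1.0e9); /* 0.9 V, 1 GHz */

    printf("full speed: %.3f W\n", p_full);
    printf("scaled:     %.3f W (%.0f%% of full)\n",
           p_half, 100.0 * p_half / p_full);
    return 0;
}
```

Because voltage enters the model quadratically, halving frequency while lowering voltage cuts dynamic power to well under half, which is why voltage scaling is the first lever battery-constrained designs reach for.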
Going Lean
Thus, server-class processors, newly hit by the power wall, have a lot of catching up to do in integrating on-chip power management logic into current design styles and methodologies. Power (and especially leakage power) and verification complexity constraints are forcing high-end core designs to become leaner. The industry trend appears to be toward cutting back on core-level complexity while scaling up the chip-level architecture through the use of multiple cores. Each core must meet single-thread performance targets while working cooperatively and efficiently with the other cores (sharing storage and interconnect resources), such that chip- and system-level performance targets are met as well. At the same time, chip-level power must remain below the target dictated by package or cooling cost limits.
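One simple way to frame this budgeting pressure (the symbols here are illustrative shorthand, not notation from this column) is to note that the package or cooling limit fixes a chip-level power ceiling that the cores and the noncore logic must share:

```latex
P_{\text{chip}} = P_{\text{noncore}} + \sum_{i=1}^{N} P_{\text{core},i} \le P_{\text{TDP}}
\quad\Longrightarrow\quad
\bar{P}_{\text{core}} \le \frac{P_{\text{TDP}} - P_{\text{noncore}}}{N}
```

Under a fixed ceiling, doubling the core count N roughly halves each core's power allowance, which is precisely the force pushing individual cores toward leaner microarchitectures.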
Learning From Embedded Systems
Integrated solutions, involving power awareness at all levels (from the application, OS, and compiler level, down to memory and processor hardware resources), are another outcome of chip-level power and temperature limits. Again, such hardware-software codesign approaches are far from new in the world of embedded systems. Hence, high-end chip and system design teams stand to gain a lot by examining designs in the low-end, embedded-systems arena. This is an interesting new turn of events, brought on by current technological trends that dictate a rather rapid increase in power consumption over the next decade. Historically, it has been the embedded-processor designers who have had to evolve their cores toward more and more complex microarchitectures to meet increased system-level, application-driven performance demands. Typically, high-end features such as superscalar execution, out-of-order processing, and simultaneous multithreading have crept into the embedded-systems space much more gradually, and well after these mechanisms have been used in high-performance, general-purpose processors. Yet, at this time, power (the great equalizer) seems to be driving a certain convergence in the microarchitecture-level definition of the processor core, the ubiquitous building block used to build systems of all kinds, embedded or not!
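As a concrete, if simplified, illustration of what such hardware-software codesign looks like at the OS layer, here is a sketch of a utilization-driven frequency governor in C. The set_frequency() hook and the three operating points are hypothetical stand-ins for whatever platform-specific mechanism (a machine register, a firmware call) a real system would expose:

```c
#include <stdio.h>

/* Hypothetical operating points in MHz, for illustration only. */
enum { F_LOW = 800, F_MID = 1400, F_HIGH = 2000 };

/* Stand-in for a platform-specific hook into the hardware's
 * voltage/frequency control; here it just reports the decision. */
static void set_frequency(int mhz)
{
    printf("governor: setting core clock to %d MHz\n", mhz);
}

/* Pick an operating point from recent utilization (0.0 to 1.0). */
static void govern(double utilization)
{
    if (utilization > 0.8)
        set_frequency(F_HIGH);  /* demand is high: spend the power */
    else if (utilization > 0.4)
        set_frequency(F_MID);
    else
        set_frequency(F_LOW);   /* mostly idle: save energy */
}

int main(void)
{
    double samples[] = { 0.1, 0.5, 0.95, 0.3 };
    int n = (int)(sizeof samples / sizeof samples[0]);

    for (int i = 0; i < n; i++)
        govern(samples[i]);
    return 0;
}
```

The point is not the particular policy but the layering: the hardware exposes a small set of power states, and software above it decides when each state is appropriate.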
It remains to be seen whether core designs used in embedded and server systems do indeed converge and stabilize to a somewhat common, small set of alternative microarchitectures. Like motors embedded in diverse electrical appliances (large and small), will processor cores one day become commodity components lost in a diverse but intricately interconnected world of general-purpose and application-specific systems (large and small)? Or will there always be a reason, and a substantial market, for designing the high-performance processor cores and chips needed to build large, power-hungry, expensive, highly reliable mainframes and server systems? Perhaps only time will tell.
I hope you enjoy this issue on embedded systems, guest edited by Alessio Bechini, Thomas M. Conte, and Cosimo Antonio Prete. The design and research trends described in this issue's articles will doubtless foster further thought about the potential convergence issues I've raised here.