NOVEMBER-DECEMBER 2000 (Vol. 20, No. 6) pp. 10-11
0272-1732/00/$31.00 © 2000 IEEE
Published by the IEEE Computer Society
Guest Editor's Introduction: Stepping Into the Future
The 21st century is finally here. When I was a child, it seemed too far away to consider, but here we are.
We don't have the future depicted in the old sci-fi novels. On the other hand, the roomful of devices that made up a mid-20th-century computer has, over the last thirty years, been integrated onto semiconductor chips and is now embedded in many of the instruments and appliances we use daily. The number of available embedded processors has grown rapidly, and many are connected to the Internet. In this sense, we have already stepped into the future.
This issue is part three of IEEE Micro's special series on the microprocessors of the 21st century. Part one featured contributions from Japanese and European companies on their future embedded processors. Part two brought us glimpses of the next-generation 64-bit processors from Intel, the Itaniums. In this third and last installment, other US vendors discuss their future microprocessors.
Marc Tremblay and others from Sun Microsystems contributed an article on the MAJC processor, which pursues parallelism at several execution levels. Its VLIW architecture is accompanied by SIMD instructions. The processor supports multithreaded programs and speeds up the loops and function calls of single-threaded programs through speculative multithreading. MAJC also supports high-speed interprocessor communication among its on-chip multiprocessors.
The demand for low-power processors in mobile computing devices has now taken root in commercial general-purpose processors, in contrast to the demand for high performance on desktop systems. David Brooks and others from IBM offer a tutorial on microarchitectures that consume very little electric power. They describe the development of a simulator that predicts final power consumption from microarchitecture-level modeling, quantify the trade-off between power consumption and performance, and assess the future potential of low-power microarchitecture.
Chris Herring from National Semiconductor expands on the microprocessor and microcontroller trends based on his experience in the market and the application of these devices.
Gene Frantz from Texas Instruments complements Herring's remarks in his article on future digital signal processor trends. Frantz discusses DSP performance trends, parallelism, semiconductor processes, power consumption, and mixed analog and digital design. He also surveys new DSP applications.
To round out the issue, Intel authors continue the discussion of the IA-64 architecture and Itanium processors.
This issue presents contributions from the vendors whose processors are used in the real world. Each generation brings demands for greater processing performance. Even today, the servers that drive the Internet, so to speak, need more performance for data mining and the like, and the embedded processors in network equipment require higher performance as well. Intranet and information appliances, PDAs, and portable telephones all demand the same performance levels while calling for reduced power consumption.
Looking back at the history of microprocessors, we can view the 1970s as the microcomputer's childhood. The first half of the 1980s gave us 32-bit microprocessors, and the latter half of the 1980s saw the emergence of RISC with its promise of simple hardware and resulting higher performance. However, RISCs couldn't surpass CISCs, as exemplified by the x86 architecture's success in the market. In the 1990s, hardware mechanisms previously reserved for expensive systems were adopted in RISC and CISC microprocessors alike: superscalar pipelines, branch prediction, out-of-order execution, speculative execution, and so on. CPU hardware has become very complex.
Now we see the emergence of VLIW architecture in which software more or less handles the instruction scheduling at compilation time. This VLIW architecture is now used for server CPUs as well as for an embedded CPU that emulates the x86 architecture.
VLIW's market success as a general-purpose CPU is not quite clear yet, but the current trend to tackle hardware complexity seems to be the one that simplifies hardware and relies on software for scheduling and other housekeeping chores. Some processors have microarchitectures that are reconfigurable under software control.
Developing special-purpose processors isn't impossible; in fact, advanced electronic computer-aided design and design automation packages make it easier than before. But compared with generally available off-the-shelf CPUs, such specialized processors aren't cost-competitive, and unless a clear niche market justifies the effort, no new development can take place.
Today's supercomputers are built using ever-speedier off-the-shelf CPUs. Market competition has driven the speed of these commercial CPUs to a new level.
Looking outside the general CPU market, we see that DSPs are firmly established in designers' minds. Specialized processors for a profitable niche market such as multimedia processing were in great demand five years ago, as is the network processor for network devices today. If an application in a large market needs a performance boost, it's quite likely that processors meeting these specific requirements will be developed.
From the application developer's viewpoint, the architecture of the processor isn't important per se. Developers will want total system integration: a system on a chip (SOC) that includes a processor core, peripheral functions, analog circuits, and other necessary device functions fitting into one small package. These SOCs will help build future embedded devices. We'll see the day when small quantities of such SOCs will be designed and fabricated on demand after the developer sends in the design data.
The software aspect of the whole business scene can't be ignored. Depending on the software programs it supports, a microprocessor can gain market share. Even if a CPU with low production cost and high performance appears, it won't succeed if it needs a general-purpose operating system. If the target application requires a program that is similar to the one available on an x86, the chance for success is still smaller, since many users ask, "Why not use the x86 to begin with?" (There's a chance for success if a specialized application in a niche market executes efficiently on the new CPU.)
The technology to emulate the instruction set of other CPUs has improved, but still we can't beat the speed of the native execution. Java, touted with the slogan, "Write Once, Run Everywhere," still has a distance to go in reaching that nirvana. Entering the 21st century, we still rely on the software resources of the 20th century, and the impact of the software inherited from the past may be larger than we imagine.
We hope that the three installments of this special series have given you food for thought about the microprocessors of the 21st century.
Ken Sakamura, IEEE Micro's Editor-in-Chief, is a professor in the Interfaculty Initiative in Information Studies at the University of Tokyo. He is currently constructing the Digital Museum, which uses various computer technologies. His primary interests lie in computer architecture and digital museums as well as real-time processing and computer-augmented environments. He initiated the TRON project in 1984 to help build computers in the 1990s and beyond. Under his leadership, over 100 manufacturers participate in the project. Since he is now also interested in how computer use will change society in the 21st century, his design activities extend to electronic appliances, furniture, houses, buildings, and urban planning. Sakamura received BS, ME, and PhD degrees in electrical engineering from Keio University in Yokohama. He is a senior member of the IEEE and a member of the ACM.