In 1987, Tsugio Makimoto observed that the semiconductor industry swings between standardization and customization every decade. According to Makimoto, the pendulum has currently swung toward standardized parts, most prominently FPGA devices. This emphasis should last through 2007, when system designers will once again feel the pressure to differentiate products with custom chip designs. With advances in capacity and performance, FPGAs have certainly emerged as a volume leader; typically, these parts are also among the first ported to high-performance semiconductor processes. Although initially a replacement for board-level logic devices, FPGAs have now captured the imagination of several diverse user communities, including computer architects looking to break out of von Neumann computing bottlenecks. Indeed, when the application allows it, spatially distributing computing tasks offers the advantages of parallel processing. Designers have exploited this spatial parallelism in building custom computing machines for tasks ranging from fingerprint recognition to image processing and bioinformatics. Reconfigurable architectures have evolved into various configurable computing platforms, ranging from loosely coupled coprocessing engines to tightly coupled configurable data paths.
This burst of creativity in exploiting FPGAs is indicative of the computing community's aggressive search for the most efficient platform (in terms of cost, energy, and design time) for future systems. Even though computing is not yet the major consumer of FPGAs, FPGA vendors have tried to tap into the appetite of the mainstream (and even high-performance) computing market by providing attractive features such as high-speed transceiver links and sophisticated memory controllers. In some cases, these links directly support mainstream processor interfaces such as HyperTransport or PCI Express. The jury is still out on how mainstream architectures will evolve, particularly given the trend toward multiple cores (a topic that D&T plans to cover in coming months), and on whether FPGAs will have a role in the new platforms. What is certain is that FPGA vendors are not standing still: they are taking advantage of their increasing transistor budgets to put one or more processor cores on chip, with support for advanced I/O and memory interfaces.
Responding to these trends, the guest editors, P.A. Subrahmanyam and Patrick Lysaght, have put together a special issue of D&T. The special issue's three contributions address key questions in architectural design for high-performance computing, architectural exploration for making the right hardware-software trade-offs, and the effective integration of these parts into an overall computing system.
Aside from the special issue, our nontheme offerings appeal to a diverse group of design and test professionals, with articles on advances in ATE frameworks, BIST for detecting crosstalk faults in a boundary-scan environment, a processor instruction-set extension for multimedia applications, and a battery emulator for embedded systems.
D&T interviews remain one of our most popular departments. Ably led by Ken Wagner, this department has included interviews with several individuals who have had a significant impact on design and test, including Gordon Moore, Gene Amdahl, Robin Saxby, and Michael Hackworth. This time, Ken interviews Patrick Gelsinger, formerly Intel's CTO and now senior vice president of the Digital Enterprise Group. Pat shares his perspective, both technical and personal, on the semiconductor industry and on his work and life. D&T is glad to bring you an up-close discussion with such a phenomenal individual. In a later issue, Ken will interview another technical visionary, Irwin Jacobs, president and CEO of Qualcomm. We hope you enjoy the issue!
Rajesh Gupta, Editor in Chief