Issue No. 02 - February (2005 vol. 38)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MC.2005.67
Yervant Zorian , Virage Logic Corp.
The silicon-scaling revolution is real and persistent. With each move to a new technology node, we attain a 50 percent area reduction and a 30 percent performance increase. This continuous scaling presents a plethora of design challenges as we progress into the nanoscale era: it imposes the need for additional design processes, such as design for manufacturability and power management, and it introduces much higher mask and tooling costs.
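These per-node figures compound quickly across successive node transitions. A minimal sketch (illustrative only, not from the article) of the cumulative effect:

```python
# Illustrative arithmetic: compound effect of the per-node scaling
# figures quoted above -- a 50 percent area reduction and a 30 percent
# performance increase at each new technology node.

def scale(nodes: int) -> tuple[float, float]:
    """Return (area factor, performance factor) after `nodes` transitions."""
    area = 0.5 ** nodes          # each node halves the area
    performance = 1.3 ** nodes   # each node adds 30 percent performance
    return area, performance

for n in range(1, 4):
    area, perf = scale(n)
    print(f"after {n} node(s): area x{area:.3f}, performance x{perf:.2f}")
```

Three node transitions thus shrink a block to one-eighth its original area while more than doubling its performance, which is why each node both invites more on-chip functionality and raises the stakes of a design respin.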
Moreover, the vagaries of nanoscale technologies place a heavy burden on the test community: scaling beyond 90 nanometers greatly extends process complexity and exacerbates leakage faults and soft errors. At the same time, process variations, including new dielectrics, multiple oxide and metal layers, multiple voltage thresholds, and smaller noise margins, confront product engineers with serious yield implications.
The increasing number of available transistors is leading designers to incorporate even more on-chip functionality in the form of large embedded memories, base I/Os, and a variety of signal and protocol processing blocks. This is far outpacing designer productivity and is greatly increasing design complexity. Finally, with fabs expected to cost on the order of $3.5 billion and with skyrocketing reticle costs, successful companies must ship in high volumes with increased yields and amortize their design costs over multiple product lines by adopting, integrating, and reusing silicon-aware intellectual property blocks from qualified vendors.
To this end, the design community is working together to further design automation and improve design flows—be it silicon-aware IP design and delivery or hardware and software automation that lets designers work with higher-level languages and abstractions that hide the underlying process complexities and allow performance, power, and area optimization at every level. Similarly, the test community is looking beyond bolt-on test approaches to solutions such as infrastructure IP for embedded test, diagnosis, and repair. To maximize manufacturing yield, the infrastructure IP functions must be optimally tuned to the design under test.
Design and Test Community
To face these challenges, the design and test community has organized itself into several professional and business-oriented organizations. As the "Test Technology Technical Council" sidebar describes, the TTTC, a professional organization sponsored by the IEEE Computer Society, serves the worldwide test community with a wide range of activities, including educational programs, conferences, workshops, and standards.
The EDA Consortium is a business-oriented organization that represents 100 electronic design automation companies. The consortium seeks to identify and address issues that are common among these companies and the customer community they serve. By focusing on commonality and promoting cooperation, the consortium augments the effectiveness of design automation tools and services.
Established in 1994, the Fabless Semiconductor Association serves the design and test community by supporting the ongoing, symbiotic relationship between fabless semiconductor companies and their suppliers, including semiconductor foundries, IP providers, electronic design automation vendors, and design service houses. The FSA facilitates productive business partnerships, disseminates relevant data, and promotes the growth of the fabless business model.
These vibrant organizations are working together to address the many challenges that face the industry as we move to 90 nanometers and below. We are proud to be associated with Computer's Nanoscale Design & Test issue showcasing some of the exciting ideas that keep our industry ahead of Moore's law.
In This Issue
This issue features three articles describing various advanced aspects of design and test.
In "Robust System Design with Built-in Soft-Error Resilience," Subhasish Mitra and colleagues address the increasingly prevalent problem of soft errors or single-event upsets. Transient errors caused by terrestrial radiation pose a major barrier to robust system design, especially as chip sizes shrink and system susceptibility to error increases. The authors describe a number of soft-error protection techniques, including a strategy for using on-chip scan design-for-testability resources for soft-error protection during normal operation.
"Transistor-Level Optimization of Digital Designs with Flex Cells" by Rob Roy and colleagues explores another extremely important subject: the increasing need to reuse IP in today's chip designs. The use of precharacterized and silicon-verified standard cells is driven by the need to create and verify large digital circuits without having to verify the circuit's behavior at the transistor level, which is simply too resource intensive to be commercially viable for most designs. On the other hand, the quality of such automated standard-cell-based designs has been poor at best, with circuits running slower by a factor of 6 and consuming more area by a factor of 10. The quest to overcome these limitations leads naturally to the creation of new design- and context-specific cells, designated flex cells, during the process of optimizing a given digital design. Designers then use these cells via a combination of register-transfer-level coding style and synthesis directives.
Finally, the technical evolution we are witnessing today, particularly shrinking geometries, is enabling the integration of complex platforms in a single system on chip, and SoCs with more than 100 processors could become commonplace. Compared with conventional ASIC design, such a multiprocessor SoC requires a fundamental change in chip design. In "Hardware/Software Interface Codesign for Embedded Systems," Ahmed A. Jerraya and Wayne Wolf propose an interface-based HW/SW codesign methodology that takes advantage of IP blocks. Working at higher levels of abstraction boosts productivity: a designer who can generate only 100 lines of hardware description language code per day accomplishes far more when those lines represent large blocks rather than logic gates. The ultimate goal of this methodology is to design both hardware and software at all abstraction levels.
These three articles address only a limited subset of the challenges facing the design and test community. The community regularly conducts conferences, workshops, symposia, and forums offering opportunities to explore potential solutions to these challenges. As the accompanying sidebars describe, key examples of these opportunities include conferences like Design, Automation, and Test in Europe (DATE), cosponsored by the TTTC and EDAC; the International Test Conference (ITC) and VLSI Test Symposium (VTS), both cosponsored by the TTTC; publications such as IEEE Design & Test of Computers; and numerous standards, such as IEEE P1500. You are encouraged to further explore the exciting challenges in nanoscale design and test by participating in these events or by subscribing to D&T.
Yervant Zorian is vice president and chief scientist of Virage Logic. He received an MSc in computer engineering from the University of Southern California and a PhD in electrical engineering from McGill University. Zorian also received an executive MBA from the Wharton School of the University of Pennsylvania. He is an IEEE Fellow, serves as IEEE Computer Society Vice President for Conferences & Tutorials, and is the editor in chief emeritus of IEEE Design & Test of Computers. Contact him at email@example.com.