
Guest Editors' Introduction: DFM Drives Changes in Design Flow

Juan-Antonio Carballo, IBM Corp.
Yervant Zorian, Virage Logic

Pages: 200-205


Design for manufacturability (DFM) has been the focus of extensive study in the semiconductor industry. Although deep-submicron processes enable the manufacture of area-efficient, high-performance chips, navigating the nanometer landscape presents enormous manufacturability challenges. In short, nanometer technology reduces process yield, reliability, and test quality, deeply affecting time to profit. These pressures are forcing designers to change traditional design flows. At the same time, time-to-market pressures are pushing companies into volume production before defect densities reach an acceptable level.

Because of these trends, a company's ability to achieve silicon success at advanced geometries will depend on how quickly it can arrive at working silicon, ensure high yield, and start volume production. This requires the ability to solve most yield-limiting challenges in the presilicon stage or to test, diagnose, and make repairs in a short time. And when you consider that designing a 90-nm chip can incur $25 million or more in nonrecurring engineering costs, designing with silicon-aware blocks and augmenting them with effective manufacturability features becomes crucial.

Optimum yield and reliability at 130-nm technologies and below have become especially challenging with the growing use of complex SoC designs and corresponding IP blocks. To help address this situation, new classes of DFM methods, tools, and IP have emerged. DFM is a set of technologies aimed at improving yield by enhancing communication across the design-manufacturing interface. It includes design techniques and IP that make a product cheaper to produce while maintaining its required quality and/or value. The most commonly used DFM techniques are based on judiciously including manufacturability criteria in the design flow, and they can dramatically impact the business performance of chip manufacturers. These techniques can also significantly alter age-old chip design flows.

Major thrusts in DFM

DFM also includes a new class of IP called infrastructure IP. Designers can embed this type of IP into their designs; its sole purpose is to support the SoC's health by performing test, diagnosis, repair, and correction for yield and reliability optimization.

Because of the increasing importance of DFM in complex SoCs, we are dedicating most of this issue of IEEE Design & Test to this topic. Despite its focus on yield, DFM is really a much broader topic, encompassing several areas:

  • Design for manufacturable patterns. These techniques include novel design flows based on advanced resolution enhancement techniques (RETs); architecture, logic, circuit, and layout optimization for future lithography nodes; and comparisons between design-rule- versus tool-based DFM methodologies.
  • New manufacturability-oriented design blocks. These advanced forms of infrastructure IP are for manufacturability issue detection, analysis, and correction; DFM via adaptive circuits, logic, and architectures; and defect-tolerant designs.
  • Improved interaction at the design-manufacturing interface. At the interface between the two, new data preparation flows address the data size explosion problem. Design techniques for reducing mask-related costs are also important.
  • Design for yield enhancement. Chips now incorporate embedded diagnosis and debug functions, and built-in process monitors and other features to perform embedded measurement. They are capable of repair analysis and self-reconfiguration, and can supply data to support statistical design, power and performance analysis, and optimization. Other techniques include variability-aware design and behavioral or logic synthesis for manufacturability.
  • Domain-specific DFM. Specialized DFM covers analog and mixed-signal circuits, and 3D designs. One particular technique, manufacturable power grids, is useful for adapting the power delivery system to manufacturing-related performance unpredictability.
  • Test-oriented DFM. This class of DFM includes techniques for manufacturability improvement via test and DFT, including test-based diagnosis, defect-based testing, failure analysis, and test-based yield learning.

Key interfaces

These techniques can involve multiple industry players and grow ever more complex because they operate across three key interfaces. The first is the customer interface: the output of a chip manufacturing facility ends up at an original equipment manufacturer or systems house, which builds a system from a set of chips. From the design standpoint, this interface is important because system design must account for the characteristics of the included chips.

Second, the supplier interface involves chip design and manufacturing houses that have a variety of suppliers. Chip design requires sophisticated EDA tools; in most cases, external suppliers provide and integrate these tools. Manufacturing and testing, on the other hand, require expensive equipment and carefully chosen materials; both come from specialized and sophisticated suppliers. So suppliers are a key interface in a company's ability to deliver high-yielding, high-performance chips.

The third interface, the design-manufacturing interface, is the focus of this issue. This interface is fundamental because it has a strong impact on overall long-term profitability, and thus on the return on investment for chip makers.

A simple yet commonly accepted definition for yield is the average percentage of manufactured chips that meet the design specifications. Based on this definition, profit per wafer depends on the number of chips per wafer, the price at which you sell each chip, the average yield, and the total cost per chip (including manufacturing, packaging, testing, distribution, and other overhead). Clearly, yield can have a very strong impact on profitability, especially when profit margins are small—which is increasingly the case because many markets are becoming price sensitive. Because manufacturers test chips at several value chain stages—wafer level (wafer probe), post-packaging test (final test), and so on—various definitions of yield are possible.
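As a concrete illustration of this leverage, consider a back-of-the-envelope sketch of the profit-per-wafer relationship just described. The numbers below are hypothetical, not drawn from any article in this issue:

```python
# Hypothetical sketch of the profit-per-wafer relationship described above.
# Simplification: the total cost per chip is charged for every die on the
# wafer, while only the yielding dies earn revenue.
def profit_per_wafer(chips_per_wafer, price, yield_fraction, cost_per_chip):
    revenue = chips_per_wafer * yield_fraction * price  # good chips sold
    total_cost = chips_per_wafer * cost_per_chip        # all chips paid for
    return revenue - total_cost

# With thin margins, a few points of yield swing profit dramatically:
for y in (0.70, 0.75, 0.80):
    profit = profit_per_wafer(500, 10.0, y, 7.0)
    print(f"yield {y:.0%}: profit ${profit:,.0f} per wafer")
# yield 70% breaks even; yield 80% earns $500 per wafer
```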

DFM enhances the communication bandwidth across the design-manufacturing interface. In most cases, designers accomplish this by judiciously including manufacturability criteria in the chip design flow. This issue includes three articles that correspond to this definition. The key item to remember is that DFM has a positive impact on the business performance of chip manufacturers.

Current DFM challenges

Unfortunately, current design techniques have several key issues. First, at the top levels of the hierarchy, it is very difficult to predict manufacturability. Second, library cell models are becoming more difficult to use as a single performance predictor, because manufacturing-related effects tend to concentrate in signal wiring and power distribution, which reside outside the standard cells. Third, conventional design rules cannot capture all the details of manufacturability information, such as information related to RETs. Fourth, device models change too often and are becoming very inaccurate, often remaining highly changeable until the chip is almost fully designed. Exacerbating these trends, re-spins have become more costly because mask costs are escalating, and the amount of mask-related data is exploding.

The use of infrastructure IP for manufacturability has certainly helped improve overall chip yield and accelerated time to volume while reducing test cost. Unfortunately, infrastructure IP does have limitations. The problem lies with the simple fact that designers optimize most infrastructure IP blocks individually, without any special knowledge of the independently developed physical IP provided by IP vendors. That is, designers optimizing the infrastructure IP know nothing about the final implementation of the physical IP, such as memory, logic, or analog, being targeted for design for manufacturability.

In effect, the infrastructure IP is "bolted on" to the physical IP as a means of improving the testability, diagnostics, and repairability of the physical IP cores. Today, because designers must locally optimize infrastructure IP solutions, they have no intimate knowledge of the physical IP's defect history or design, and no way to alter or optimize the infrastructure IP to account for it. The result, as you might expect, is suboptimal manufacturability. Defect densities are improving, but not fast enough to make a meaningful impact on desired yields. Because bolt-on infrastructure IP is not aware of the latest manufacturing problems, it can fail to identify defective chips, and these test escapes ship to users.

What's more, the fact that the infrastructure IP and physical IP are separate entities means that designers must individually manage them and separately integrate them into the design. This slows the design process and can negatively impact desired area, power, and speed, adding lengthy cycles in bringing the design up to working silicon.

Finally, using locally optimized infrastructure IP also compromises the chip's reliability, because SoCs that remain untested for certain errors are more likely to fail in the field. To help ensure optimal yield, acceptable reliability, and superior test quality, you must use an infrastructure IP designed with specific knowledge of the physical IP it supports, as well as the targeted manufacturing process. We call this type of semiconductor IP, featuring a highly tuned and integrated combination of physical and infrastructure IP, silicon aware.

For today's nanometer SoCs, it is an essential solution for optimum manufacturability and maximum yield. To ensure optimal test quality, the silicon-aware IP leverages the intimate details of the physical IP's design to provide a better environment for detection and diagnosis. For design productivity, silicon-aware IP delivers the benefit of being a single source of IP, so designers integrate the physical and infrastructure IP as a single entity produced by a single compiler.

New directions in DFM solutions

Past DFM work has only partially solved these problems, so much research remains ongoing in this area, and this special issue presents a subset of that work. The five sidebars contained in this introduction show the vision of key individuals involved in the DFM domain as executives, technologists, or analysts. The four articles following this guest editorial go into further detail in explaining novel DFM approaches.

The article by Alfred K. Wong, "Some Thoughts on the Integrated Circuit Design-Manufacture Interface," provides a clear view of two complementary approaches to managing the increasingly complex design-manufacturing interface: restricted design rules and model-based design. The author's straightforward discussion is very timely, because the industry is moving toward a combination of both approaches.

The article by Jeng-Liang Tsai et al., "Yield-Driven, False-Path-Aware Clock Skew Scheduling," addresses causes of performance-related circuit yield loss, using clock skew scheduling as a tool. It is an interesting example of how managing circuit-level parameters directly impacts yield metrics, and it clearly illustrates the direction of current DFM research.

The article by Jay Jahangiri and David Abercrombie, "Value-Added Defect Testing Techniques," is intriguing in that it describes advanced DFM-oriented test methods that target defect coverage, yield learning, and cost. The authors argue that testing is useful for more than filtering chips: It can help target test patterns, feed information back to DFM tools, and reduce overall costs.

Greg Yeric et al., "Infrastructure for Successful BEOL Yield Ramp, Transfer to Manufacturing, and DFM Characterization at 65 nm and Below," provide an excellent description of DFM test structures intended as infrastructure IP for process monitoring. They also describe the systematic yield loss problem for certain categories of blocks and then introduce a method for measuring the causes of yield loss.

In addition to these four DFM articles, this special issue also features a Perspective entitled, "New Test Paradigms for Yield and Manufacturability." Its author, Robert Madge, addresses a wide range of yield challenges and solutions based on test-oriented techniques.

Conclusion

Although brief, we believe this special issue is one more step in adding value to the critical field of DFM. We hope you enjoy it.

DFM: The New Mother of Invention

Raul Camposano, Synopsys

Traditionally, yield was the exclusive concern of fabs, with the design community giving it little attention. Today, it is crucial to attack the problems of slower fab ramps and plunging yields across the entire value chain: in the fab, during lithography, and during design.

During manufacture, engineers use metrology and test structures to monitor the process. They use this information to optimize the process window, to drive yield management systems, and to optimize libraries and memories for yield. Lately, engineers have also been using Technology CAD models to control advanced processes or to simulate statistical variations of electrical parameters as a function of process parameters. Design is receiving more process information, such as statistical models of electrical parameters, models of stressed silicon, yield data for given structures, and reliability information.

Sub-resolution lithography requires resolution enhancement technologies (RETs), such as optical proximity correction, assist features, phase-shift masks, and specific illumination schemes. By considering design intent, such as for timing and power, designers can use RET to adjust the tolerances allowed for a given feature and optimize the number of shots for the mask writer. Designers then use lithography models to verify lithography compliance as early as possible in the design flow, for example, during design rule checking or routing.

From the design perspective, library design affects yield in the front end of the line. Routing determines the metal layers and vias, where techniques such as critical-area minimization, wire spreading, redundant vias, dummy metal fill, and avoidance of low-contrast areas help improve yield. Parametric yield losses are increasingly addressed by probabilistic methods such as statistical timing analysis and optimization. Finally, an integrated flow between design, test pattern generation, and fault diagnosis aids in the rapid isolation of yield-loss-prone design areas.

The key to yield-boosting DFM lies in increasing the amount of manufacturing data passed to design and in passing design intent to the mask synthesis tools and to process control. An effective approach must break down the barriers among process, lithography, and design. DFM is becoming serious business for the entire semiconductor industry: It has the potential to completely realign the entire semiconductor value chain.

Raul Camposano is senior vice president, chief technology officer, and general manager of the Silicon Engineering Group at Synopsys. Contact him at raul.camposano@synopsys.com.

The Future of DFM

Andrzej J. Strojwas, PDF Solutions Inc.
John K. Kibarian, PDF Solutions Inc.

Until the deep-submicron era, engineers defined the typical assurance of IC manufacturability using worst-case SPICE device files and layout design rules. However, with each new generation of process technology, design rules have become more complicated. It is now very common to create an additional set of DFM rules, which results in the explosion of the overall rule set and also significantly increases the overall design rule checking effort. The underlying assumption for each DFM rule is the probabilistic nature of the physical mechanism that causes the failure. In the nanometer era, the spectrum of physical phenomena that these rules must cover is mind-boggling. The systematic characterization of these phenomena and their impact on IC yield and performance is of crucial importance for true DFM.

The most accurate way of accounting for these effects is to provide accurate physical models and then simulate the actual IC layout to estimate the impact on yield and performance. Such a simulator requires calibration to the actual manufacturing process by specially designed test structures that cover all the possible layout patterns found in real products. To provide observability in the range of a few failures per billion as a function of layout attributes, such test chips must contain many specially designed layout patterns that require the full reticle area. In return, the necessary DFM characterization is achievable with just a few wafers, which actually reduces the cost and (equally importantly) the turnaround time.
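The scale involved is easy to underestimate. A simple Poisson estimate (with hypothetical numbers of our own choosing) shows why full-reticle pattern counts are necessary:

```python
import math

# Sketch: to observe a failure mechanism occurring at rate p per layout
# pattern instance, the expected count n * p must be well above zero,
# or most test chips will show no failures at all. Numbers are hypothetical.
p = 2e-9  # two failures per billion instances
for n in (1e8, 1e9, 1e10):
    expected = n * p
    p_none = math.exp(-expected)  # Poisson probability of observing nothing
    print(f"{n:.0e} instances: expect {expected:4.1f} failures, "
          f"P(none) = {p_none:.2f}")
# Only on the order of 1e10 instances give a statistically usable signal,
# hence test chips that fill the reticle with replicated layout patterns.
```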

Such a modeling-based approach can then serve as the basis for generating guaranteed-to-yield IP blocks and as a yield sign-off tool for the entire IC layout. Moreover, you can also use such a simulation-based model to generate guidelines for the physical-design tools (for place and route) without creating any extra effort for IC designers or drastically changing the design flow. This will enable truly proactive DFM, in which all design modifications take place before verification and tape-out rather than afterward.

In the sub-50-nm era, new approaches that maximize layout regularity must be developed to guarantee the manufacturability of gigascale ICs. In the extreme version of such regularity, guaranteed-to-yield layout patterns are created in this DFM environment, and layout synthesis becomes equivalent to pattern assembly.

Andrzej J. Strojwas is the chief technologist at PDF Solutions Inc. Contact him at ajs@pdf.com.
John K. Kibarian is president, CEO, and director of PDF Solutions Inc. Contact him at jkk@pdf.com.

The DFM Challenge

Dennis Wassung, Adams Harkness Inc.

DFM has been a hot topic in the semiconductor industry for some time. However, it is by no means well defined or easily understood. Broadly, DFM became a hot concept as new, leading-edge IC designs became increasingly difficult to manufacture at acceptable yields.

Today, manufacturers face rapid technology change, increasingly complex SoC designs, a seemingly endless array of new semiconductor process materials, and IC feature sizes reaching into the nanotechnology realm. These factors only make the challenges more difficult.

At Adams Harkness, we see several DFM challenges. To start, it has been a challenge to define the term DFM, the products and technologies it encompasses, and the market's size. Some would say DFM is more hype than reality. More important, it has been a significant challenge to coax two historically separate groups, design and manufacturing, into working together effectively. However, through collaborative efforts between these groups, new tools are beginning to emerge that aim to overcome the growing technical challenges. I believe DFM is driving change.

To better understand DFM, I categorize the segment into three areas: process-aware IC design, resolution enhancement technology (RET), and process characterization and yield analysis. Although most EDA-based DFM solutions today consist of "fixing" a nonyielding design prior to entering manufacturing (essentially the postprocessing of layout data), we see a new set of emerging products targeting DFM by implementing process-aware technology early and throughout IC design.

Although this correct-by-construction approach is the ultimate goal, the largest segments within DFM today exist outside the core IC design flow. Post-layout RET operations such as optical proximity correction play a critical role in successfully manufacturing today's chips. Additionally, manufacturing-process characterization and yield analysis play a substantial role in the fab, working to ensure leading-edge manufacturing processes will achieve required yields—and ideally provide IC designers with accurate process parameters.

As the industry moves toward 65-nm technology and beyond, these three disparate segments within DFM must work together to successfully produce next-generation ICs that overcome today's daunting semiconductor economics. We believe new DFM-focused EDA tools, process-aware semiconductor IP, and better manufacturing process data will come together, resulting in better-yielding, higher-quality chip designs. DFM is here to stay!

Dennis Wassung is vice president of equity research at Adams Harkness, covering Advanced Design & Test Technologies. Contact him at dwassung@adamsharkness.com.

DFM: Closing the Gap Between Design and Manufacturing

Alex Alexanian, Ponté Solutions Inc.

Advances in semiconductor technology continue to shrink the minimum feature size of VLSI circuitry, putting more devices on each wafer, each with higher speed and lower power dissipation. At the same time, design and manufacturing complexities result in decreased product yield and reliability. Nowadays, yield is as much a design problem as it is a manufacturing problem. Yield prediction and yield improvement in the design stage aim at making ICs tolerant to manufacturing defects. Yield enhancement in the design stage will improve design quality, reduce the cost of working silicon, and accelerate time to market and time to volume for silicon products.

There are three important points to understand in achieving high-yield ICs in technologies below 100 nm.

First, design rules no longer suffice to transfer process information into the design flow. Process features have become so small that minor process variations now cause major fluctuations in design reliability. If design rules were enough, we wouldn't have low yields in the first place. Two chips following the same design rules can yield differently as a result of this growing gap between design and manufacturing. Statistical-model-based verification solutions will complement or replace standard design rule checking approaches. Statistical yield models describing specific failure mechanisms can characterize future silicon in the design stage.
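To give a flavor of what such models look like, here is a minimal sketch of the classical defect-limited yield formulas from the yield literature (the Poisson and negative-binomial models), not any vendor's proprietary model; the numbers are hypothetical:

```python
import math

# Classical defect-limited yield models: Y = exp(-A * D0) (Poisson) and
# Y = (1 + A * D0 / alpha)^(-alpha) (negative binomial, with defect
# clustering parameter alpha). A is critical area, D0 is defect density.
def poisson_yield(area_cm2, d0_per_cm2):
    return math.exp(-area_cm2 * d0_per_cm2)

def neg_binomial_yield(area_cm2, d0_per_cm2, alpha=2.0):
    return (1.0 + area_cm2 * d0_per_cm2 / alpha) ** (-alpha)

# Two chips obeying the same design rules but with different critical
# areas (hypothetical numbers) yield very differently:
print(f"{poisson_yield(0.5, 0.4):.2f}")       # ~0.82
print(f"{poisson_yield(1.0, 0.4):.2f}")       # ~0.67
print(f"{neg_binomial_yield(1.0, 0.4):.2f}")  # ~0.69 with clustering
```

Statistical-model-based verification generalizes this idea from a single chip-level defect density to per-mechanism models evaluated against the actual layout.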

Second, the industry needs open yield-modeling standards. Open standards will allow the entire semiconductor industry to contribute to the creation of effective statistical yield models. At the same time, these public and open yield models must protect proprietary process parameters and keep the process IP of different manufacturers confidential.

Finally, the industry needs a yield-driven design methodology. Similar to timing considerations, which started at the very back end and grew into the synthesis flow, yield considerations must exist at every stage of the IC physical-design cycle. IP vendors must provide yield-centric variations of library cells and custom IP blocks; fabs must provide statistical yield models. Designers must have design-for-yield solutions that provide analysis, prediction, and optimization capabilities for the synthesis of gate-level netlists, placement, routing, and full-custom layouts.

By providing these pieces to the design community, the industry will bridge the gap between design and manufacturing, and once again deliver high-yielding chips.

Alex Alexanian is the president and CEO of Ponté Solutions, Inc. Contact him at alex@pontesolutions.com; http://www.pontesolutions.com.

Reinventing Test for DFM and DFT in a Sub-100-nm World

Steve Wigley, LTX Corp.
Neil Kelly, LTX Corp.

As more device designs use sub-100-nm processes, high-speed I/O, and other advanced technologies, semiconductor test must not only test the resulting devices, but also reinvent itself. Today, manufacturers make most test decisions on an individual-device basis, typically employing a go/no-go choice. Test can no longer function as simply that type of decision node. Rather, it must become a decision and information node that helps maintain cost-effective, high-yielding processes. The solution relies in part on greater coordination of DFT and BIST strategies, as well as an increased focus on the impact that process variations have on design performance through DFM.

One example is in the area of IP for high-speed I/O that supports protocols such as PCI Express. Most of these implementations have BIST integrated into the physical layer and require significant design characterization before volume production. However, the nature of these designs and their operating speed make it possible for slight process variations to affect jitter content and have a major impact on I/O performance.

Through new, innovative offerings such as adaptive loop-back test technology, ATE providers will enable the cost-effective use of encapsulated test IP. Such IP will provide a fast ramp into volume production without compromising characterization. These offerings will provide full characterization of high-speed I/Os with or without using BIST, and will scale to a broad range of production test needs, such as tools for single- or multi-lane designs. In addition, enhanced on/off test and data collection capabilities will enable the capture of critical performance data, which can then feed back into production management systems to maintain yield levels and improve the DFM of future devices.

This is just one example of how semiconductor test will adapt to enhance the use of DFT and DFM techniques. New test requirements will continue to advance in line with device complexity, and test must continuously evolve to fulfill its new role as a key information node that helps maintain cost-effective, high-yielding processes.

Steve Wigley is vice president of product marketing for LTX Corp. Contact him at steve_wigley@ltx.com.
Neil Kelly is the chief technical officer at LTX Corp. Contact him at neil_kelly@ltx.com.

About the Authors

Juan-Antonio Carballo is currently a partner in IBM's Venture Capital Group, responsible for semiconductors, EDA, and open systems. He previously led research in the design and manufacture of adaptive communications chips at IBM Research, where he filed more than 20 patents in systems and circuit design, design economics, and design management. Carballo has a BS and an MS in telecommunications engineering from the Universidad Politecnica de Madrid, an MBA from College des Ingenieurs in Paris, and a PhD in electrical engineering from the University of Michigan. He chairs the International Technology Roadmap for Semiconductors design and system drivers chapters, and is the chair elect of the IEEE Committee on Design Automation.
Yervant Zorian is vice president and chief scientist of Virage Logic. He previously was the chief technology advisor of LogicVision and a Distinguished Member of Technical Staff at Bell Labs. Zorian has an MSc in computer engineering from the University of Southern California, a PhD in electrical engineering from McGill University, and an executive MBA from the Wharton School of Business, University of Pennsylvania. He is the IEEE Computer Society vice president for conferences and tutorials, founder and chair of IEEE 1500 Working Group, and a Fellow of the IEEE.