Guest Editors' Introduction: Defect-Oriented Testing in the Deep-Submicron Era
SEPTEMBER/OCTOBER 2002 (Vol. 19, No. 5) pp. 5-7
Jaume Segura, Universitat de les Illes Balears

Peter Maxwell, Agilent Technologies
CMOS IC scaling increases device and interconnect density, allowing more logic on a die at higher clock rates and enhancing overall performance. Improvements in process technology enable integration on a single die of circuits with different functions that require distinct manufacturing process steps. Under the added constraint of reduced time to market, the complexity of CMOS ICs has grown steadily over the past few decades. Shortened development times require shorter design, verification, and manufacturing cycles, as well as more efficient and accurate test and debugging methods. Despite these impressive trends in the microelectronics industry, deep-submicron technologies, especially those below 0.18 micron, face several important test-technology challenges. This special issue presents five articles that address some of these challenges.
Defect-oriented testing emerged at the beginning of the 1990s as a response to the limitations of traditional logic fault models, limitations that were especially severe for high-reliability applications. Defect-oriented test strategies first analyze defect properties and then determine the best test technique for detection. This approach led to parametric test methods that focus on IC parameters other than logic behavior. Such strategies complemented logic-based tests and enhanced product quality, often serving as additional reliability screens.
Deep-submicron devices brought new physical phenomena that pushed the limits of manufacturing capabilities. The miniaturization laws in this domain led to a new scaling approach, constant-field scaling, as an alternative to constant-voltage scaling. Constant-field scaling dictates lower supply voltages for successive technology nodes to prevent the electric field across the MOSFET gate oxide from exceeding reliability limits. Lowering the supply voltage directly affects circuit performance. Reducing the transistor threshold voltage compensates for this performance loss by maintaining acceptable gate overdrive voltages, and therefore adequate saturation currents. The threshold voltage reduction, in turn, raises the transistor off-state (leakage) current, which depends exponentially on this parameter. Rising off-state current means higher quiescent supply current at the circuit level, undermining the effectiveness of traditional single-threshold IDDQ testing, one of the most effective parametric techniques used in defect-oriented testing. Several approaches extend IDDQ-based methods to deep-submicron technologies. These techniques correlate different IDDQ measurements from the same die or from several dies to enhance IDDQ sensitivity. Some of these methods perform a preanalysis on a few dies, whereas others require post-test statistical analysis. The first article in this special issue, by Sabade and Walker, reviews these techniques and analyzes their merits and limitations, as well as their prospects for future technologies.
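To see why a single pass/fail current limit erodes with scaling, consider a minimal sketch of the exponential dependence just described. All device parameters, node values, and the defect current below are illustrative assumptions, not measured process data:

```python
import math

def subthreshold_leakage(v_th, i_0=1e-7, n=1.4, v_t=0.026):
    """Per-transistor off-state current (A) under the classic subthreshold
    model I_off = I0 * exp(-Vth / (n * VT)). I0, n, and VT are assumed,
    illustrative values, not data for any real process."""
    return i_0 * math.exp(-v_th / (n * v_t))

# Hypothetical nodes: threshold voltage scales down, device count scales up.
nodes = [("0.35 um", 0.60, 1e6), ("0.18 um", 0.45, 10e6), ("0.13 um", 0.35, 50e6)]
defect_current = 100e-6  # assumed defect (e.g., bridge) contribution: 100 uA

for name, v_th, n_devices in nodes:
    background = subthreshold_leakage(v_th) * n_devices  # fault-free IDDQ
    print(f"{name}: fault-free IDDQ ~ {background * 1e6:.3g} uA, "
          f"defect/background ratio ~ {defect_current / background:.3g}")
```

Once the defect-to-background ratio approaches or drops below 1, a fixed current threshold can no longer separate defective dies from the fault-free leakage spread, which is precisely what motivates the correlation-based extensions the article surveys.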
Traditional defect-oriented test methods have mainly targeted bridge detection, because this defect mechanism was considered the most probable in previous CMOS generations. Although researchers have made significant advances in characterizing and detecting open defects, the work has focused mainly on hard opens, defects that completely isolate an interconnect. Weak opens have received less attention; they do not fully disconnect a line but introduce significant line resistance and a corresponding signal delay. This defect mechanism will grow in deep-submicron technologies, mainly because of the increased number of interconnect levels (currently around nine or 10) and the elevated number of vias and contacts in a circuit. A circuit with 10 million transistors might contain 10 times as many vias, with the corresponding verification challenges. Weak opens can be difficult, if not impossible, to catch in production testing because they induce subtle, difficult-to-detect timing disturbances. Many weak opens represent increased reliability risks and can become hard opens once the circuit is operating in the field. The second article, by Rodríguez Montañés et al., provides a detailed study of the properties and distribution of weak opens in deep-submicron technologies. The article also projects how this important defect mechanism will affect future scaled technologies.
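The detection difficulty is easy to quantify with a first-order estimate. The sketch below treats a weak open as a series resistance driving the downstream load capacitance; the resistance and capacitance values are assumptions chosen only to illustrate the range:

```python
# First-order model: a weak open acts as a series resistance R_open
# driving the downstream load capacitance, adding roughly R*C of delay.
c_load = 100e-15  # assumed 100 fF of downstream interconnect and gate load

# A healthy via is a few ohms; a weak open may range (assumed here) from
# kilohms to megohms before the line degrades into a hard open.
for r_open in (10.0, 1e3, 100e3, 10e6):
    extra_delay = r_open * c_load
    print(f"R_open = {r_open:>10,.0f} ohm -> extra delay ~ {extra_delay * 1e9:.3g} ns")
```

A kilohm-range open adds only a fraction of a nanosecond, easily hidden inside the timing slack of a production test, yet it is exactly this population that risks becoming a hard open in the field.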
Test technology also faces noise generation and coupling mechanisms (crosstalk and switching noise), traditionally regarded as purely design concerns. Increased operating frequencies require fast signal transitions and sharp current demands that generate noise in the circuit. The problem is aggravated by the aspect ratio of lower-level metal interconnects, whose height-to-width ratio keeps increasing: making lines taller holds their resistance at acceptably low values while their width shrinks to fit more interconnect lines in the same area. Because interconnect pitch is also decreasing, the overall effect is a substantial increase in coupling capacitance.
The interconnect system's complexity has grown significantly in high-performance ICs, where inductive coupling must sometimes be considered, depending on the relation between signal transition times and the lengths of some interconnects. Moreover, the characteristics of the substrate on which the devices are built make the circuit bulk an effective medium for distributing noise throughout the entire IC. Supply voltage scaling exacerbates these noise generation and propagation effects by tightening the signal-to-noise ratio required to distinguish between logic levels; today's supply voltage levels were the noise margins of older technologies. The interconnect system's complexity makes its full characterization impractical for leading-edge designs, forcing researchers to adopt ad hoc, simplified techniques, with a corresponding loss in accuracy, to predict this subsystem's electrical behavior. The third article, by Aragonès et al., provides a detailed analysis of noise generation and propagation in CMOS ICs.
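A crude parallel-plate estimate shows why taller, more tightly pitched wires shift capacitance from the layer below toward the neighboring line. The geometries here are invented for illustration, and the model ignores fringing fields entirely:

```python
EPS_SIO2 = 3.9 * 8.854e-12  # permittivity of SiO2 dielectric, F/m

def plate_cap(area_m2, gap_m):
    """Parallel-plate capacitance; a crude estimate that ignores fringing."""
    return EPS_SIO2 * area_m2 / gap_m

# Hypothetical geometries for 1 mm of wire:
# (width, height, spacing to neighbor, dielectric thickness below), meters.
length = 1e-3
geometries = {
    "relaxed node": (0.8e-6, 0.6e-6, 0.8e-6, 0.8e-6),
    "scaled node":  (0.3e-6, 0.6e-6, 0.3e-6, 0.6e-6),
}
for name, (w, h, s, t) in geometries.items():
    c_ground = plate_cap(w * length, t)  # bottom plate to layer below
    c_couple = plate_cap(h * length, s)  # sidewall to adjacent line
    print(f"{name}: coupling/ground capacitance ratio ~ {c_couple / c_ground:.2f}")
```

In the scaled geometry, most of the line's capacitance couples to its neighbor rather than to ground, so a victim line increasingly sees its aggressor's transitions.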
A major challenge for IC manufacturers is process parameter fluctuation. As device sizes fall below 0.1 micron, process control becomes far more difficult, producing circuits with large electrical and geometric parameter variations both within a die and between dies. Parameter variation widens the spread of macroscopic circuit properties such as maximum operating frequency and total power dissipation, and supply voltage noise and temperature gradients across the die aggravate it. This is one of the major problems facing future generations of test technology. It affects test methods based on parameter limits (such as delay fault testing or IDDQ testing), in which pass/fail limits must be established individually for each circuit. The fourth article, by Keshavarzi et al., discusses test techniques and methods for handling parameter fluctuation in production testing. These rely on multiparameter correlation and statistical methods that expose outlier dies from the main population. The article also presents a technique that dynamically adjusts for parameter spread in a die, keeping circuit operation within desired ranges.
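A minimal sketch conveys the flavor of such statistical screens: fit the population trend between two measured parameters, then flag dies that sit far off the trend, rather than applying one fixed limit to every die. The data here are synthetic, and the linear IDDQ-versus-Fmax relation is an assumption made for illustration:

```python
import numpy as np

# Synthetic per-die data: faster dies (lower Vth) also leak more, so
# fault-free IDDQ correlates with Fmax across the process spread.
rng = np.random.default_rng(0)
fmax = rng.normal(1.0, 0.05, 200)               # GHz, die-to-die spread
iddq = 2.0 * fmax + rng.normal(0.0, 0.05, 200)  # mA, assumed linear trend
iddq[17] += 0.8                                 # inject one defective die

# Fit the population trend, then flag dies whose residual exceeds
# k standard deviations: an outlier screen instead of one fixed limit.
slope, intercept = np.polyfit(fmax, iddq, 1)
residuals = iddq - (slope * fmax + intercept)
outliers = np.flatnonzero(np.abs(residuals) > 4.0 * residuals.std())
print("flagged dies:", outliers)  # expect the injected die, index 17
```

Because the limit adapts to where each die sits in the parameter spread, a fast, naturally leaky die is not failed for its high absolute IDDQ, while a die that leaks far more than its speed predicts is.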
We have learned that design and test are two sides of the same coin and that the circuit test plan must start during the design phase. Complexity and economic reasons related to time-to-market pressures have driven this view, and DFT practices that support circuit verification have made today's massive ICs possible. One concern in such massive ICs is power delivery. Circuit activity in high-end ICs is relatively low; typically 20% to 30% of the total circuitry is working in normal operation. The power distribution and delivery system is sized to handle such activity levels. Usually, during IC test, the circuit's operation mode is significantly different from normal operation and generates substantially higher circuit activity. This can be controlled when directing the test process from ATE but is not the case when implementing built-in self-test approaches, which generate test sequences and analyze output responses through on-chip circuitry. This circuitry trades off between area overhead and defect coverage. The last article, by Girard et al., presents a BIST technique that accounts for circuit activity, achieving high defect coverage while maintaining reduced circuit activity.
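As a toy illustration of the activity problem (this is not the authors' technique; the LFSR, pattern count, and toggle metric are all assumptions), the sketch below generates pseudorandom patterns with an LFSR, measures the switching the sequence would induce as bit flips between consecutive patterns, and then reorders the same pattern set to lower that activity:

```python
def lfsr_patterns(seed, count, width=16):
    """Fibonacci LFSR (taps 16, 14, 13, 11), standing in for a BIST
    test-pattern generator."""
    state, patterns = seed, []
    for _ in range(count):
        patterns.append(state)
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << (width - 1))
    return patterns

def toggles(patterns):
    """Bit flips between consecutive patterns: a crude proxy for the
    switching activity the test sequence induces in the circuit."""
    return sum(bin(a ^ b).count("1") for a, b in zip(patterns, patterns[1:]))

pats = lfsr_patterns(seed=0xACE1, count=64)
print("raw LFSR order:", toggles(pats), "toggles")

# Greedy reordering by Hamming distance: the same 64 patterns (hence the
# same coverage), applied in a lower-activity order.
remaining, ordered = set(pats[1:]), [pats[0]]
while remaining:
    nxt = min(remaining, key=lambda p: bin(p ^ ordered[-1]).count("1"))
    ordered.append(nxt)
    remaining.remove(nxt)
print("reordered     :", toggles(ordered), "toggles")
```

Storing an explicit pattern order on chip costs area, which is one face of the overhead-versus-coverage trade-off the article examines.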

Jaume Segura is an associate professor at Universitat de les Illes Balears, Palma de Mallorca, Spain. His research interests include device and circuit modeling and VLSI design and test. Segura has an MS in physics from Universitat de les Illes Balears and a PhD in electrical engineering from Universitat Politècnica de Catalunya.

Peter Maxwell is a DFT consultant in the Semiconductor Products Group of Agilent Technologies. His research interests include test methodologies, DFT, and test method effectiveness. Maxwell has an MSc in physics from the University of Auckland, New Zealand, and a PhD in electrical engineering and computer science from the Australian National University.