IEEE Design & Test of Computers, vol. 23, no. 5, September/October 2006
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MDT.2006.120
Kenneth M. Butler, Texas Instruments
This special section of IEEE Design & Test of Computers, along with the International Test Conference 2006, highlights the value that test adds to the electronics manufacturing business. It leads us to think about test in a whole new way.
The theme for ITC 2006 is "Getting More out of Test," which is very appropriate in light of recent advances and changes in our industry. These days, everybody is talking about things like design for manufacturability (DFM), yield enhancement technologies, test-based outlier techniques, and the like. Whole companies have been founded on these concepts and have prospered, such as PDF Solutions, whose CEO was a keynote speaker at ITC 2005. What makes these developments truly exciting is the role test plays in all of these new technologies. Test is truly the cornerstone on which the disciplines of yield and reliability engineering are built. And we're not just talking about characterization test or an occasional product lot, but large production volumes analyzed with new and ever-more-powerful data mining and data reduction techniques.
We have also had to rethink what it means for a die, chip, board, or system to "pass" or "fail" a test. In the early days, particularly for digital products, we could always devise a test whose results were clear indicators of good or bad units. Yes, there was (and is) the perennial question of a test's coverage or thoroughness. But that aspect related more to the effort expended to incorporate good test-access mechanisms into the design and less to the technology in which the product was manufactured. Today, however, we see ample evidence of the increasingly subtle nature of electronics failure mechanisms. We can view this problem from two perspectives: the "time zero" or "test escape" question, and the separate but equally important reliability aspect.
A good example of the former is the relatively recent proliferation of fault models and test approaches that various groups are advocating. Everybody continues to rely on the workhorse stuck-at fault model for bulk static defect coverage. But how long will that strategy continue to work for us? At what point must we supplement, or dare I say replace, stuck-at testing with other candidate test techniques such as N-detect tests, extracted bridging fault tests, or other nontraditional forms of testing? Authors in this magazine, at ITC, and at other venues continue to grapple with this question.
On the reliability side, the underlying mechanisms, such as channel hot carrier (CHC) effects and negative bias temperature instability (NBTI), have always been there. We have known about them for decades, but their impact on quality and product lifetime was relatively invisible to us. Unfortunately, that statement is no longer true. NBTI and other reliability mechanisms degrade product lifetime and performance and demand that we add margins for their occurrence. So, again, we must call on test to help us identify these problems when they occur, quantify the magnitude of the yield/reliability impact, and screen the material before it gets into the consumer's hands. Overall, therefore, we can see that test must play an ever-more-important role in more aspects of the electronics business.
The first article in this special section, "Extracting Defect Density and Size Distributions from Product ICs" by Jeffrey Nelson et al., is a classic example of learning all you can about the manufacturing process via production test. Today, the cost to construct and populate an IC wafer fabrication facility is measured in billions of dollars, and the cost of a mask set in an advanced technology can approach or even exceed $1 million. The inevitable outcome of these spiraling costs is that fewer companies can afford to maintain captive IC manufacturing sites, and many are moving to fabless, foundry-based business models. But how do you learn from and respond to important yield and defect Pareto information when design and manufacturing are in two completely separate companies, often geographically distant from each other, without having to devote costly wafer volume to test vehicles? This article addresses that important and timely question.
"Improving Transition Delay Test Using a Hybrid Method" by Nisar Ahmed and Mohammad Tehranipoor deals with the increasingly complex subject of delay test. Starting somewhere around the 130-nm technology node, and perhaps spurred by the advent of copper metallization, delay defects suddenly became something that, left untested, could result in too large an escape rate as seen by the customer. The industry responded in earnest by applying delay test techniques to large numbers of production ICs. Immediately, users of this technology discovered issues with things like pattern volume, realizable coverage, and test generation tool runtimes. This article is an example of the types of new thinking being applied to this problem to make delay test more tractable and more usable, thus getting more out of it.
The final article, "Impact of Thermal Gradients on Clock Skew and Testing" by Sebastià Bota et al., in some sense turns the ITC theme on its ear. To get more out of test, we must fundamentally understand not only its capabilities but also its limitations. As die sizes grow larger and clock rates continue to climb, so, too, do power requirements, driving die temperatures higher as well. Within-die thermal gradients can have negative effects on timing and clocking, which degrade testing's accuracy and results. This article systematically examines the issue of thermal effects, introduces a methodology for quantifying them, and proposes a design technique for counteracting them.
Taken as a whole, the articles demonstrate the changing role of test in the entire electronics industry and how it's not just for pass/fail anymore. Contributors to ITC, IEEE Design & Test, and numerous other IEEE test conferences and workshops are continually inventing and demonstrating new ways in which the test process can increase our rate of product and process learning, speed products to yield and reliability entitlement, and generally contribute more to our collective bottom line. I hope that this information will inspire you to come to ITC, see the presentations of articles like these, interact with their authors, visit the exhibits floor and see the new products that leverage the best test has to offer, and, most importantly, share your thoughts and ideas on how we can get more out of test.
I would like to take this opportunity to thank Editor-in-Chief Tim Cheng and the entire IEEE D&T editorial staff for their encouragement and assistance in producing this special issue.
Kenneth M. Butler is a TI Fellow at Texas Instruments in Dallas. His research interests include outlier techniques for quality and reliability and test-data-driven decision making. Butler has a BS from Oklahoma State University and an MS and a PhD from the University of Texas at Austin, all in electrical engineering. He was the program chair of ITC 2005 and currently serves on the program and steering committees. He is a Senior Member of the IEEE and a member of the ACM.