March-April 2008 (vol. 25, no. 2), pp. 112-113
Published by the IEEE Computer Society
Scott Davidson , Sun Microsystems
Nur A. Touba , University of Texas at Austin
This special issue represents a snapshot of the progress in test compression, a key strategy for dealing with rapidly growing test data volume. Since the beginning of this decade, test compression has emerged as an important DFT market segment.
Test data volume has grown dramatically with increasing design size and the need for additional tests to target defects in nanometer designs, including transition, path delay, and bridging faults. The limited bandwidth between the ATE and the chip under test poses a major bottleneck, as does memory available for the storage of test vectors. Test compression involves encoding test data in a compressed form so that less data needs to be transferred, thereby reducing test time and the need for tester memory.
A wide variety of test compression techniques have been developed, both for compressing test vectors and compressing output responses. Test vectors are highly compressible because typically only 1% to 5% of their bits are specified (care) bits. The rest are don't-care bits, which can have any value with no impact on fault coverage. The main challenge for compressing the output response is determining how to handle unknown (X) values that might be present. There are various possible sources of X values, including uninitialized memory, bus contention, nonscanned elements, floating tristates, and multicycle paths.
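As a rough illustration of why such sparsely specified vectors compress so well, the short Python sketch below stores only a test cube's care bits and regenerates a full vector by filling the don't-care positions. It is a toy encoding for exposition only, not any of the commercial schemes discussed in this issue.

# Toy illustration: a test cube with mostly don't-care (X) bits can be
# stored as just its care-bit positions and values.

def compress_cube(cube):
    """Return (length, care_bits), where care_bits lists (position, value)."""
    return len(cube), [(i, b) for i, b in enumerate(cube) if b in "01"]

def expand_cube(length, care_bits, fill="0"):
    """Rebuild a full vector; the fill value of X positions does not affect
    fault coverage, so any choice is valid."""
    vec = [fill] * length
    for i, b in care_bits:
        vec[i] = b
    return "".join(vec)

cube = "XXX1XXXXXX0XXXXXXXX1"        # 20 bits, only 3 of them specified
length, care = compress_cube(cube)
print(care)                           # [(3, '1'), (10, '0'), (19, '1')]
print(expand_cube(length, care))      # 00010000000000000001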
In recent years, several researchers and companies have developed compression methods and products that achieve significant amounts of compression. These new methods have extended the life of legacy ATE and have been synergistic with the need for additional tests to detect the defects arising in nanometer designs. Thus, test compression continues to be a very active area.
The first article in this special issue, "Historical Perspective on Scan Compression," by Rohit Kapur, Subhasish Mitra, and Thomas Williams, traces the incremental research in test technology over five decades that led to scan compression. They show how the focus of test has changed along with changes in technology and how scan compression has evolved from earlier work.
The second article, "VirtualScan: Test Compression Technology Using Combinational Logic and One-Pass ATPG," by Laung-Terng Wang et al., describes a test compression methodology that uses a broadcaster and compactor based on combinational logic. The broadcaster expands data from the tester channels to fill a larger number of scan chains, and the compactor compresses the output response from a larger number of scan chains down to a smaller number of channels going back to the tester. A key advantage of this test compression methodology is its simplicity, which allows the use of a one-pass ATPG that takes into account all the constraints that the compression architecture imposes.
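The sketch below illustrates the general broadcaster/compactor idea in a few lines of Python. It is an assumed, simplified model (a pure fan-out broadcaster and an XOR compactor), not the actual VirtualScan logic described in the article.

# Illustrative sketch only: a combinational broadcaster fans a few tester
# channels out to many scan chains, and a combinational compactor XORs many
# scan-chain outputs back down to a few tester channels.

def broadcast(channel_bits, fanout):
    """Expand each tester-channel bit to 'fanout' scan-chain inputs."""
    return [b for b in channel_bits for _ in range(fanout)]

def compact(chain_bits, num_outputs):
    """XOR groups of scan-chain output bits down to 'num_outputs' channels."""
    group = len(chain_bits) // num_outputs
    return [sum(chain_bits[i * group:(i + 1) * group]) % 2
            for i in range(num_outputs)]

stimulus = broadcast([1, 0], fanout=4)      # 2 channels drive 8 scan chains
print(stimulus)                              # [1, 1, 1, 1, 0, 0, 0, 0]
responses = [1, 0, 1, 1, 0, 0, 1, 0]         # one output bit per scan chain
print(compact(responses, num_outputs=2))     # [1, 1]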
In "UMC-Scan Test Methodology: Exploiting the Maximum Freedom of Multicasting," Chao-Wen Tzeng and Shi-Yu Huang propose a methodology for providing greater flexibility and reduced control-bit overhead when decompressing test vectors in a multicasting architecture. Many test vector compression schemes broadcast data from one tester channel to concurrently load multiple scan chains. However, if two scan chains are always loaded with the same values, some faults will not be detected. One solution is to periodically reconfigure the set of scan chains to which each test channel is broadcast, but this approach requires additional control information. This article proposes a more flexible approach with reduced control-bit overhead.
Finally, "Hierarchical Test Compression for SoC Designs," by Kee Sup Kim and Ming Zhang, addresses test compression in hierarchical SoC designs. In many such designs, some cores have their own test compression circuitry, which then must be integrated with a second level of compression at the full-chip level. Of particular concern is output response compaction, in which X-handling and aliasing problems can arise. This article describes a systematic way of designing the second-level output compaction logic to preserve X-handling and multiple-error tolerance capabilities.
This special issue concludes with a Last Byte column by Scott Davidson, which shows how test compression helps fill the continuum between ATPG and logic BIST. This is an important need, as evidenced by the rapid adoption of test compression technology by industry.
We hope you enjoy this special issue. We thank all the authors who have shared their time and expertise to compose these high-quality articles. We welcome any feedback you may have, and we hope this special issue will help foster more discussion on this important emerging topic.


Scott Davidson is a senior staff engineer in the Microprocessor Quality Group of Sun Microsystems. His research interests include analysis of integrated-circuit field returns, DFT, and test generation. Davidson has a BS in electrical engineering from the Massachusetts Institute of Technology, an MS in computer science from the University of Illinois at Urbana-Champaign, and a PhD in computer science from the University of Louisiana at Lafayette. He is a member of the IEEE and the IEEE Computer Society. He is the department editor of "The Last Byte" and a book review editor for IEEE Design & Test.


Nur A. Touba is a professor in the Department of Electrical and Computer Engineering at the University of Texas at Austin. His research interests include DFT and dependable computing. He has a BS from the University of Minnesota, Twin Cities, and an MS and a PhD from Stanford University, all in electrical engineering. He is a senior member of the IEEE.