May/June 2011 (vol. 28, no. 3), pp. 84-85. Published by the IEEE Computer Society.
ABSTRACT
Panel Summaries reports on two ITC 2010 panels: "Concurrent Test Supported by DFT Techniques and ATE Companies" and "How Smart Does Our Silicon Need To Be?"
ITC 2010: Concurrent Test Supported by DFT Techniques and ATE Companies
Ralf Arnold, Infineon Technologies
Concurrent test needs support from both DFT and ATE. Many devices are suitable for concurrent testing, but what about ATE hardware and software? Do they support concurrent testing adequately? What are the different philosophies in DFT and ATE hardware and software?
In the "What is Concurrent Testing?" panel at the 2010 International Test Conference, panel moderator John Carulli (Texas Instruments) succinctly answered that question: Concurrent testing (parallel testing within a device) is a method, in addition to the multisite testing (that is, parallel testing across devices) method, designed to reduce the cost of testing. Concurrent testing can be achieved only by working across boundaries such as device operation, DFT awareness, tester capability, and software development.
From the chip manufacturer side, panelist Ralf Arnold (Infineon Technologies) stated that DFT techniques for building concurrently testable devices are already available, and showed examples of how Infineon's automotive power designs could use concurrent testing. Arnold believes the most structured way to describe a device's concurrent testability is a machine-readable concurrent test matrix, which specifies which tests can run in parallel and how they interact. He argued that the main differentiator among future ATE offerings will be software capability, above all the ability to run tests concurrently. Arnold also noted that concurrent testing can be very helpful in post-silicon validation, although it is very difficult for complex devices.
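A simplified illustration may help. The Python sketch below encodes a hypothetical pairwise concurrent test matrix for an imaginary automotive power device and greedily packs compatible tests into concurrent sessions. The test names, compatibility entries, and first-fit grouping are illustrative assumptions, not Infineon's actual format or flow.

```python
# Hypothetical machine-readable concurrent test matrix, assuming a simple
# boolean pairwise-compatibility encoding (the panel did not specify one).
from itertools import combinations

# Tests on an imaginary automotive power device.
tests = ["flash_bist", "logic_scan", "adc_linearity", "lin_phy", "iddq"]

# compatible[{a, b}] is True if tests a and b may run concurrently
# (no shared tester resources, no on-die interaction such as supply noise).
compatible = {frozenset(p): True for p in combinations(tests, 2)}
compatible[frozenset(("adc_linearity", "logic_scan"))] = False  # scan noise disturbs the ADC
compatible[frozenset(("iddq", "flash_bist"))] = False           # BIST activity masks leakage

def can_join(session, test):
    """A test may join a session only if it is compatible with every member."""
    return all(compatible[frozenset((t, test))] for t in session)

def schedule(tests):
    """Greedily pack tests into concurrent sessions (first-fit)."""
    sessions = []
    for test in tests:
        for session in sessions:
            if can_join(session, test):
                session.append(test)
                break
        else:
            sessions.append([test])
    return sessions

print(schedule(tests))
# [['flash_bist', 'logic_scan', 'lin_phy'], ['adc_linearity', 'iddq']]
```

First-fit packing is only one possible policy; a production flow would also weigh per-test runtimes and instrument demands when forming sessions.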
Panelist Erik Volkerink (Verigy) observed that concurrent test starts with DFT, during which concurrent testability is defined. Panelist Randy Kramer (Teradyne) concurred, noting that DFT is the key to success because it drives the level of device concurrency, which in turn determines concurrent test efficiency. Like the efficiency of test program development in general, the efficiency of concurrent test development in particular depends on the ATE software (environment) and the ATE hardware (system architecture).
Another panelist, Mani Balaraman (Advantest), argued that enabling concurrent testing is a key capability users now demand from ATE. Concurrent test flows should be implementable with minimal test engineering effort and at lower capital cost. From within the test program, engineers should be able to switch each test setup between concurrent and nonconcurrent modes, backed by an easily usable debugging environment. Additionally, seamless resource management for concurrent test flows, such as threaded-instrument resource slicing and intelligent resource management, helps maximize tester resource utilization and thus lower the capital cost of concurrent test.
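As a rough illustration of the resource management Balaraman described, the following sketch checks that the tests grouped into one concurrent session do not oversubscribe any instrument pool and reports per-instrument utilization. The instrument names, pool sizes, and per-test demands are invented for the example and do not reflect any vendor's catalog.

```python
# Hypothetical tester-resource model for a concurrent session: each test
# declares the instrument channels it needs, and a session is legal only
# if no instrument pool is oversubscribed.
available = {"dps": 8, "digital_pins": 64, "awg": 2, "digitizer": 2}

needs = {
    "logic_scan":    {"digital_pins": 48, "dps": 2},
    "adc_linearity": {"awg": 1, "digitizer": 1, "dps": 1},
    "lin_phy":       {"digital_pins": 4, "dps": 1},
}

def session_fits(session):
    """True if the summed demand of all tests stays within each pool."""
    demand = {}
    for test in session:
        for res, n in needs[test].items():
            demand[res] = demand.get(res, 0) + n
    return all(demand.get(res, 0) <= cap for res, cap in available.items())

def utilization(session):
    """Fraction of each instrument pool consumed by the session."""
    return {res: sum(needs[t].get(res, 0) for t in session) / cap
            for res, cap in available.items()}

session = ["logic_scan", "adc_linearity", "lin_phy"]
assert session_fits(session)
print(utilization(session))
# {'dps': 0.5, 'digital_pins': 0.8125, 'awg': 0.5, 'digitizer': 0.5}
```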
Following the panel presentations, an audience member from a communication device manufacturer remarked that the company's devices have supported concurrent test for several years, but that programming today's test systems to enable it remains very painful, given the present state of ATE concurrent-test support features.
Overall, it seemed that chip vendors are ready, on the DFT side, to begin using concurrent testing, and that the community agrees on what needs to be done and how beneficial it would be. The hope is that ATE vendors will soon deliver the ideas and methods that let the test community take the next evolutionary step in test cost reduction.
ITC 2010: How Smart Does Our Silicon Need To Be?
LeRoy Winemberg, Freescale Semiconductor
As process geometries drop below 65 nm, the differences between design models and manufactured silicon become unacceptably large. Material variability increases dramatically, and the results can be very unpredictable with regard to product performance. Up to now, the industry-wide solution has been to apply large guardbands to compensate for the difference between models and silicon. However, excessive guardbanding is expensive: it leaves a lot of performance on the table, makes timing closure more difficult, is typically inaccurate, and usually leads to lost revenue in the end.
At the 2010 International Test Conference, in the "How Smart Does Our Silicon Need to Be?" panel, coordinated by LeRoy Winemberg (Freescale Semiconductor) and moderated by Ken Butler (Texas Instruments), company and university experts discussed methods that technologists have tried in efforts to close the gap between model and silicon reality. These approaches include better presilicon characterization techniques and data collection, static timing analysis (STA) and, more recently, statistical STA, among others. The consensus: with random variability increasing and aging degrading design reliability, an expensive gap remains, in both time and money.
All panelists agreed that a new approach is required to solve this problem and that a viable solution is the use of "advanced" on-chip sensors and monitors (i.e., embedded circuits more advanced than just simple ring oscillators) to collect data from the manufactured designs themselves. The benefit of this approach is that the data collected on-chip by these circuits or monitors could be used to fine-tune the design models for subsequent designs.
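As a toy illustration of this feedback from silicon to models, the sketch below fits a linear correction between model-predicted and monitor-measured ring-oscillator frequencies across a handful of dies. The data and the linear form of the correction are fabricated assumptions, chosen only to show the mechanism.

```python
# Hedged sketch: tune a design model using on-chip monitor readings by
# fitting a least-squares line from predicted to measured frequency.
predicted = [1.00, 1.02, 0.98, 1.05, 0.97]   # model-predicted freq (GHz)
measured  = [0.94, 0.97, 0.91, 1.00, 0.90]   # on-chip monitor readings (GHz)

n = len(predicted)
mx = sum(predicted) / n
my = sum(measured) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(predicted, measured))
         / sum((x - mx) ** 2 for x in predicted))
offset = my - slope * mx

def corrected(model_freq):
    """Apply the silicon-derived correction to a new model prediction."""
    return slope * model_freq + offset

print(f"correction: f_silicon = {slope:.3f} * f_model + {offset:.3f}")
print(f"corrected prediction for 1.10 GHz: {corrected(1.10):.3f} GHz")
```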
Sachin Sapatnekar (University of Minnesota) and Mohammad Tehranipoor (University of Connecticut) agreed that the idea can be taken a step further: these monitors could also be used by the design to adapt itself to aging and reliability effects, which can vary both temporally and spatially, and not always uniformly. The design could likewise use these monitors to adapt to process variations, so that it continues to operate at an optimum or near-optimum point. Moreover, the sensors and monitors open the possibility of self-correction, applied at manufacturing test to increase yield, in the field to increase product quality, or both.
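In the same toy style, the following sketch illustrates the kind of monitor-driven adaptation loop the panelists described: a monitor reports timing slack, and the design nudges its supply voltage to hold a near-optimum operating point as the circuit ages. The slack model, thresholds, and step sizes are all invented for the example.

```python
# Hedged sketch of a monitor-driven voltage-adaptation loop; not any
# vendor's scheme. Aging slowly erodes slack, and the loop compensates.
V_MIN, V_MAX, V_STEP = 0.80, 1.10, 0.01   # volts
SLACK_LOW, SLACK_HIGH = 0.05, 0.15        # normalized slack thresholds

def read_slack(vdd, age_factor):
    """Toy monitor model: slack grows with voltage, shrinks with aging."""
    return (vdd - 0.75) * 0.5 - age_factor

def adapt(vdd, age_factor):
    """One control step: raise Vdd if slack is tight, lower it if generous."""
    slack = read_slack(vdd, age_factor)
    if slack < SLACK_LOW:
        vdd = min(vdd + V_STEP, V_MAX)    # guard against timing failure
    elif slack > SLACK_HIGH:
        vdd = max(vdd - V_STEP, V_MIN)    # recover wasted power
    return vdd

vdd = 0.90
for month in range(24):                   # aging-style drift over two years
    vdd = adapt(vdd, age_factor=0.002 * month)
    print(f"month {month:2d}: vdd = {vdd:.2f} V")
```

The same loop structure could drive adaptive body bias or frequency instead of supply voltage; which knobs to expose is exactly the design question the panel debated.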
Gordon Gammie (Texas Instruments) pointed out that it is nearly impossible to build a monitor that exactly mimics the behavior of a complex chip, so it is important to understand a monitor's limitations and build in flexibility. All panelists believed that if these self-tuning circuits are inaccurate or not designed properly (i.e., without a proper understanding of the impact they can have on the circuit design), the financial benefit of monitor-based self-tuning and adaptation could be lost. For example, which controls should the monitor's self-adaptation and tuning have access to: supply voltage, frequency, adaptive body bias, power islands? Many other issues likely remain to be resolved; controlling the wrong balance or mix of such factors could cost performance, power, and reliability.
Phil Nigh (IBM) held a very positive outlook on the potential uses of these embedded circuits. He felt the idea could be extended further: the embedded circuits could also be used for design characterization, manufacturing debug, and diagnostics, and some fraction of them could even be exposed to the end customer for debug and characterization at the card and system levels. He was especially interested in using these monitors for power supply noise and IR-drop correlation between IC vendors and their customers. However, he pointed out that standards, such as IEEE's proposed P1687, would need to be developed for connecting to and controlling these on-chip instruments.
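For a flavor of what such a standard governs, the sketch below models, in a highly simplified way, the reconfigurable scan network at the heart of P1687: segment-insertion bits (SIBs) splice an instrument's register into the active scan path on demand. Real P1687 networks are described in ICL and PDL rather than Python, and the class names and register lengths here are illustrative assumptions.

```python
# Hedged, highly simplified model of a P1687-style scan network: closed
# SIBs bypass their instrument segment, keeping the scan path short.
class SIB:
    """A segment-insertion bit gating one instrument's test data register."""
    def __init__(self, name, tdr_bits):
        self.name = name
        self.tdr_bits = tdr_bits    # instrument register length
        self.open = False           # closed SIBs bypass their segment

class Network:
    def __init__(self, sibs):
        self.sibs = sibs

    def path_length(self):
        """Active scan-path length: 1 bit per SIB plus any opened TDRs."""
        return sum(1 + (s.tdr_bits if s.open else 0) for s in self.sibs)

    def select(self, name):
        """Open only the named instrument's SIB (one retargeting step)."""
        for s in self.sibs:
            s.open = (s.name == name)

net = Network([SIB("temp_sensor", 12), SIB("ro_monitor", 16), SIB("ir_drop", 24)])
print(net.path_length())     # 3: all SIBs closed, shortest path
net.select("ro_monitor")
print(net.path_length())     # 19: 3 SIB bits + 16-bit monitor register
```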
In closing, the panelists called for more research across both academia and industry on this cutting-edge topic, especially on the best approach for sub-65-nm silicon designs. The open questions are clear: Which embedded circuits make the most sense (for aging, enabling more aggressive design, characterization, debug, correlation, and so on)? Is standardization necessary, and if so, how much? Or are embedded circuits unnecessary in the first place because better approaches to the problem exist?