Issue no. 2, March/April 2006 (vol. 23), pp. 164-166. Published by the IEEE Computer Society.
ABSTRACT
Summaries of panel sessions from the 2005 International Test Conference.
The technical program of the International Test Conference includes the opportunity for attendees to engage in informal and entertaining discussions and debates during its popular panel sessions. For the 2005 program, industry and research experts addressed a wide range of subjects, including topics both old and new, to educate and entertain the test community.
Here, the ITC 2005 panel organizers capture the deliberations and results from their panels for those D&T readers who might not have had the opportunity to attend the conference or parallel panel discussions. Additional details are available in the panelist position statements included in the ITC 2005 proceedings on CD.
If you are an industry or academic professional with an idea for a hot panel topic and are interested in organizing a panel for the next ITC, please visit the ITC Web site ( http://itctestweek.org) for more information or contact a member of the ITC 2006 panel program committee: Carol Stolicny (carol.stolicny@intel.com) or Fidel Muradali (f_muradali@yahoo.com).
Have we overcome the challenges associated with SoC and multicore testing?
Organizer and moderator: Sankaran Menon, Intel
Panelists: Nathan Chelstrom, IBM; Raj Raina, Freescale Semiconductor; Tim Wood, AMD; and Yervant Zorian, Virage Logic
This panel brought SoC and multicore DFT experts from various industries together to debate whether the test community has overcome the challenges associated with SoC and multicore testing. Such testing must cover both homogeneous and heterogeneous combinations of cores and SoCs. Testing multiple interconnected cores and SoCs can leave the logic outside the individual cores (also known as glue logic) untested by the core-level tests. Structure-based scan testing has been in use for decades, and for SoC and multicore architectures it can drastically reduce manual test generation effort. However, one challenge of multicore scan testability is that although scan achieves high coverage within an individual reuse core, achieving comparably high coverage once the reuse cores are combined on a single die, or at the SoC level, is far more difficult.
The panel was very interactive: a majority of the 150-plus attendees participated in a poll at the beginning of the session to answer the question in the panel's title. In this initial poll, only a few raised hands indicated a belief that we have indeed overcome the challenges associated with SoC and multicore testing.
Two of the panelists took the stance that we have overcome the challenges: one argued that we have done so, albeit only through innovative techniques; the other said that we have overcome them by walking away from them. The two remaining panelists believed that the challenges associated with SoC and multicore architectures are manifold, spanning scan-chain allocation, pin limitations, diagnosis, pattern reuse, identification of defective cores, and so forth, all of which demand a disciplined approach and changes to current methodology. One panelist highlighted the complexities of using reuse macros in SoCs, stating that without a structured approach the challenges would be insurmountable. The panelists also discussed core-internal test, core test access, and the integration of on-chip test capabilities, noting that yield optimization and test-escape reduction require a set of new techniques to address today's challenges.
The audience's questions covered topics ranging from the complexities of handling multiple JTAG TAP controllers in SoCs to the reuse of multiple homogeneous and heterogeneous cores. Several interesting questions concerned IEEE 1500 becoming a standard and the lack of supporting tools, which is hampering the test community's adoption of the new reuse-core standard. The poll from the beginning of the session was repeated at the end, and this time the show of hands indicated that almost all attendees agreed we have not yet overcome the challenges associated with SoC and multicore testing.
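To illustrate the coverage gap the panelists described, here is a minimal back-of-the-envelope sketch in Python; the fault counts and coverage figures are hypothetical and were not presented at the panel.

```python
# Back-of-the-envelope illustration (hypothetical fault counts): even when
# every reuse core reaches high scan coverage internally, untested glue
# logic between the cores pulls down the overall SoC-level coverage.

def soc_coverage(core_faults, core_coverage, glue_faults, glue_coverage=0.0):
    """Weighted fault coverage of the assembled SoC."""
    covered = sum(f * c for f, c in zip(core_faults, core_coverage))
    covered += glue_faults * glue_coverage
    total = sum(core_faults) + glue_faults
    return covered / total

# Two reuse cores, each with 99% scan coverage of its own faults, plus
# glue logic holding 10% of the chip's faults that core-level tests miss.
print(soc_coverage([50_000, 40_000], [0.99, 0.99], 10_000))  # ~0.89
```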
Final D-frontier: Should DFT be outsourced?
Organizer: Luis Basto, Texas Instruments
Moderator: Ben Bennetts, Bennetts and Associates
Panelists: Yu Huang, Mentor Graphics; Carl Holzwarth, Synopsys; LeRoy Winemberg, Freescale Semiconductor; and Jeff Roehr, Analog Devices
Major companies are slashing jobs by the thousands in the US and Europe while opening design centers and creating jobs by the thousands in other countries, most notably China and India. Meanwhile, engineering and computer science graduation rates in the US are dwindling. To outsource or not to outsource, that is not the question. The panelists and most of the audience agreed that outsourcing is a fact of life, so the focus is on how to do it successfully.
Panelists agreed that although Asia offers an attractively large pool of low-cost engineers, finding qualified and skilled personnel is still tricky and fraught with pitfalls. One panelist mentioned that many contractors claim to have DFT experience even though their last encounter with DFT might have been running a scan insertion script on a small module several years ago. Other difficulties cited include keeping the work pipeline full in good times and in bad. It is important to maintain a working relationship even during lean times because, if you let good contractors go, they might not be available when you need them.
Because the design and synthesis process is so closely tied to DFT insertion, the tasks are hard to decouple; outsourcing one without the other is neither easy nor sensible. Other risks include high contractor turnover and the security of entrusting work to entities that do not have a good track record of IP protection.
One panelist posed an interesting question: Do we make our own clothes? The trend is toward more and more complex business processes, and cost is not the only factor. It is necessary to account for competitive pressure, service quality, and the availability of qualified personnel. Companies must carefully weigh the loss of in-house expertise in the outsourced tasks.
The issue of analog DFT also came up. Whereas digital designers have vastly more tools and expertise at their disposal to handle DFT, analog and mixed-signal designs are often highly customized, with ad hoc DFT. This type of DFT is therefore more difficult, or even impossible, to outsource.
Because there was very little debate among the panelists themselves, the discussion between audience and panelists was quite lively. One audience member, a DFT outsourcer in the US, remarked that cost is never the first issue raised in his contacts with customers; rather, people look for experience with previous successful projects. One comment from the audience, that some CEOs or CFOs should be outsourced, drew heavy applause.
So how do you do it "right"?
It is possible to outsource DFT, but it takes careful homework. Establish a good partnership with the contractor. Reward them for a job well done. Do not just give them boring, repetitive tasks that no one wants. Charge them with interesting and challenging projects. But most importantly—communicate, communicate, and communicate. The session ran out of time and did not cover IP protection and the language barrier in sufficient depth.
And now a parting thought, in the words of Deng Xiaoping: "It does not matter whether the cat is black or white. If it can catch mice, then it's a good cat."
Is the concern for soft errors overblown?
Organizer and scribe: Sandip Kundu, University of Massachusetts, Amherst
Moderator: Kaushik Roy, Purdue University
Panelists: Rajesh Galivanche, Intel; Vijaykrishnan Narayanan, Penn State University; Rajesh Raina, Freescale Semiconductor; and Pia Sanda, IBM
This Tuesday evening panel session provided an opportunity for the panelists and the audience to discuss whether the concerns about soft errors are overblown.
Citing concerns about the reliability and availability of servers and quoting data center managers, two panelists took the position that the concern is not overblown. A third panelist claimed that the concern has always loomed, and it will most likely remain only a concern rather than become a problem. The fourth panelist took the emphatic position that although soft error is a concern, it is overshadowed by other concerns related to noise and failures resulting from dynamic voltage scaling, thermal hot spots, and a host of process- and design-related issues.
During the debate, one panelist acknowledged that soft errors are indistinguishable from errors from other sources, and that attributing all errors to soft errors might magnify the issue. All panelists agreed that because soft errors cannot be distinguished from other error sources, it is better to build solutions that work for all error sources rather than for a specific one.
Audience members presented concerns from their specific companies. Based on the opinions during the discussion, it appears that the communications and signal processing community worries less about soft error than the computing community. Within the computing community, those closer to silicon design appear more concerned with design marginality errors whereas those at higher levels of design seem to worry more about soft errors.
The panelists supported their positions with strong data and facts. The audience remained engaged throughout the debate, and the discussion stayed purely technical. The most notable point to emerge is that, at the system level, a soft error is ordinarily indistinguishable from any other source of error; thus, designing for soft errors alone is unwise.
ITC test compression shootout
Organizer and moderator: Scott Davidson, Sun Microsystems
Panelists: Brion Keller, Cadence; Kee Sup Kim, Intel; Janusz Rajski, Mentor Graphics; and Shianling Wu, Syntest
Designated inquisitor: Al Crouch, Inovys
Test compression represents the hottest new DFT area discussed at ITC. Test compression tools combine the low tester memory requirements of logic BIST with the higher fault coverage of deterministic ATPG. This panel's purpose was to compare test compression techniques on the same test case.
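As a rough illustration of the tester-memory argument, consider the following sketch; the channel counts, chain lengths, and pattern counts are hypothetical and do not correspond to any panelist's tool.

```python
# Hypothetical parameters: an on-chip decompressor lets a few tester
# channels drive many short internal scan chains, so the stimulus stored
# on the tester shrinks from "one bit per scan flop per pattern" to
# "channels x internal chain length per pattern".

FLOPS = 640_000        # scan flip-flops in the design
PATTERNS = 10_000      # deterministic ATPG patterns
CHANNELS = 8           # tester scan-in channels feeding the decompressor
INTERNAL_CHAINS = 800  # short chains behind the decompressor
CHAIN_LEN = FLOPS // INTERNAL_CHAINS          # 800 bits per internal chain

uncompressed_bits = FLOPS * PATTERNS          # bypass mode: full scan vectors
compressed_bits = CHANNELS * CHAIN_LEN * PATTERNS  # compressed stimulus
print(uncompressed_bits // compressed_bits)   # 100x stimulus-volume reduction
```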
All test compression vendors were invited to participate; those who accepted included three tool vendors, one person who had developed a solution internally, and an academic. Panel organizers provided all of them with the ITC99_2 benchmark: a real, though small, production ASIC. The panelists reported compaction efficiency and described how well their tools worked on the benchmark. They did not report area overhead, because the benchmark's small size would have made such a figure misleading. Organizers did not ask panelists to report fault coverage, because the comparison was about vector compression, not vector generation; the panelists provided it anyway.
All panelists were able to generate and compress vectors for this design. Several modified the design, replacing the internal latches with flip-flops and adding pullups or pulldowns to the internal tristate buses to reduce the number of internal unknown (X) values.
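Those modifications matter because compressed responses are typically compacted on chip, and a single unknown value can mask an entire compacted observation. The following is a minimal three-valued sketch of that effect, not a model of any panelist's actual compactor.

```python
# Minimal sketch of why unknown (X) values get designed out: compacted
# responses (for example, through an XOR tree or a MISR) become unknown
# as soon as any contributing scan cell is X, masking real observations.
# Values here are '0', '1', or 'X'.

def xor3(a, b):
    """XOR in three-valued logic: any X makes the result X."""
    return 'X' if 'X' in (a, b) else str(int(a) ^ int(b))

def compact(scan_slice):
    """XOR-compact one slice of scan-cell values into a single output bit."""
    out = '0'
    for v in scan_slice:
        out = xor3(out, v)
    return out

print(compact(['1', '0', '1', '1']))  # '1'  -- fully known slice
print(compact(['1', '0', 'X', '1']))  # 'X'  -- one unknown hides the rest
```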
Most of the panel consisted of presentations about the panelists' compression techniques and their benchmark results. Many panelists provided results with and without DFT issues fixed, and for different tool parameter settings. Questions included requests for more details about the results and why certain techniques were or were not used.
The panel drew a tremendous audience, indicating the continued high level of interest in this topic.