IEEE Design & Test of Computers, vol. 21, no. 1, January/February 2004, pp. 65-66. Published by the IEEE Computer Society.
Hans-Joachim Wunderlich , University of Stuttgart
Sandeep K. Shukla , Virginia Tech
Leveraging Infrastructure IP in the Field
Hans-Joachim Wunderlich, University of Stuttgart
The 1st IEEE International Workshop on Infrastructure IP (I-IP 2003) hosted the IEEE Design & Test co-organized panel, Leveraging Infrastructure IP in the Field, at International Test Week in Charlotte, North Carolina. Moderator Dean Adams (Pleiades Design and Test Technologies) introduced the panel with a summary of infrastructure IP. He pointed out some important infrastructure IP techniques such as BIST, built-in self-repair, built-in repair analysis, error-correcting codes, chip ID, fuse techniques, electrostatic discharge (ESD) protection, and thermal measurement. An important objective of all of these infrastructure IP types should be ease of use for chip designers and chip users. Furthermore, different aspects of these infrastructure IP functions can serve purposes other than that of chip production.
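None of the specific products behind these techniques were detailed on the panel; purely as a rough illustration of the logic BIST idea Adams listed, the following Python sketch generates pseudorandom test patterns with a linear-feedback shift register, the classic on-chip pattern-generation structure. The 8-bit width, tap positions, and seed are illustrative assumptions, not details from the panel.

    # Illustrative 8-bit Fibonacci LFSR pattern generator for logic BIST.
    # Taps follow a commonly cited maximal-length polynomial; width, taps,
    # and seed are example choices only.
    def lfsr_patterns(seed=0x01, width=8, taps=(7, 5, 4, 3), count=8):
        state = seed
        for _ in range(count):
            yield state
            # XOR the tap bits to form the feedback bit, then shift left.
            feedback = 0
            for t in taps:
                feedback ^= (state >> t) & 1
            state = ((state << 1) | feedback) & ((1 << width) - 1)

    if __name__ == "__main__":
        for pattern in lfsr_patterns():
            print(f"{pattern:08b}")   # patterns applied to the circuit under test

In silicon, such a generator is typically paired with a signature register that compacts the circuit's responses, so the chip can test itself without an external pattern source.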
Mike Ricchetti of Intellitech pointed out the use of testing, debug, and diagnosis throughout the complete life cycle of a system. He emphasized the role of programmable technologies in enabling reconfiguration and reprogramming in the field at board and system levels.
Matteo Sonza Reorda of the Politecnico di Torino brought up the most widely used fault tolerance aspects of infrastructure IP in the field. In addition, for verifying and validating the fault tolerance properties of a system, he proposed integrating fault injection cores into systems on chips for use throughout the life cycle.
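Sonza Reorda's cores are not described in detail in this summary; the sketch below is only a minimal software illustration of the fault injection concept itself, flipping a single bit in a simulated register file and comparing the output of a rerun against a golden run. The workload and register contents are invented for the example.

    import random

    # Minimal bit-flip fault injection sketch: run a "golden" computation, rerun it
    # with one randomly flipped bit in the input state, and compare the outputs.
    def workload(regs):
        return sum(regs) & 0xFFFF             # stand-in for the computation under test

    def inject_bit_flip(regs, width=16):
        regs = list(regs)
        idx = random.randrange(len(regs))
        bit = random.randrange(width)
        regs[idx] ^= 1 << bit                 # single-event-upset-style bit flip
        return regs, (idx, bit)

    regs = [0x1234, 0xBEEF, 0x0042, 0x7F00]
    golden = workload(regs)
    faulty_regs, fault = inject_bit_flip(regs)
    result = workload(faulty_regs)
    print(f"fault at reg {fault[0]}, bit {fault[1]}:",
          "observable (output differs)" if result != golden else "silent (output unchanged)")

A hardware fault injection core would do the equivalent in place, perturbing registers or memories during operation so that the system's fault tolerance mechanisms can be exercised and validated.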
According to Mouli Chandramouli of Virage Logic, the rising cost of field maintenance, combined with a lack of adequately qualified field personnel, is pushing the electronics industry toward embedded infrastructure IP for manufacturing and maintenance. He noted that in the case of embedded memories, the infrastructure IP built in for manufacturing test and repair is reusable in the field for periodic maintenance and to support correct and reliable operation.
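Chandramouli spoke in general terms; as a simplified illustration of the kind of algorithm an embedded-memory BIST engine runs both at manufacturing time and during in-field maintenance, the sketch below applies a March C- style test to a Python list standing in for a memory of single-bit cells. The memory model and failure reporting are assumptions for the example.

    # Simplified March C- test:
    # {up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); up(r0)}
    def march_c_minus(mem):
        n, failures = len(mem), []

        def check(addr, expected):
            if mem[addr] != expected:
                failures.append((addr, expected, mem[addr]))

        for a in range(n):            mem[a] = 0                      # up(w0)
        for a in range(n):            check(a, 0); mem[a] = 1         # up(r0, w1)
        for a in range(n):            check(a, 1); mem[a] = 0         # up(r1, w0)
        for a in reversed(range(n)):  check(a, 0); mem[a] = 1         # down(r0, w1)
        for a in reversed(range(n)):  check(a, 1); mem[a] = 0         # down(r1, w0)
        for a in range(n):            check(a, 0)                     # up(r0)
        return failures

    print(march_c_minus([0] * 16))    # fault-free memory yields an empty failure list

In a self-repairing memory, the failing addresses collected this way would feed a built-in repair analysis step that maps defective rows or columns to spare resources.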
Sang Baeg of Cisco took the system perspective. He asked for a component-level technique that could also be used at the system level. He noted the need for very high availability, especially for his applications. Currently, high-end systems just use parity checking to detect intermittent failures. Baeg would not apply soft repair (reconfiguration via multiplexers) in the field for reliability and availability reasons.
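Since parity is the one mechanism Baeg named explicitly, a minimal even-parity sketch follows; the 64-bit word and the injected bit position are assumptions for illustration, and real systems typically compute parity per byte or per word in hardware.

    # Even parity: the stored parity bit makes the total number of 1s even, so any
    # single-bit error in the data (or in the parity bit itself) is detected on readback.
    def parity_bit(word):
        return bin(word).count("1") & 1            # 1 if the word has an odd number of 1s

    def check(word, stored_parity):
        return (parity_bit(word) ^ stored_parity) == 0   # True means no error detected

    data = 0xDEADBEEFCAFEF00D                      # example 64-bit data word
    p = parity_bit(data)
    corrupted = data ^ (1 << 17)                   # flip one bit to model an intermittent fault
    print(check(data, p), check(corrupted, p))     # True False

Parity of this kind detects single-bit upsets but cannot correct them or catch an even number of flipped bits.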
Miron Abramovici of Design Automation for Flexible Chip Architectures (DAFCA) proposed that the mother of all infrastructure IP is reconfigurable logic. It can serve as a tester for other cores and can support fault tolerance, design error repair, manufacturing test, and yield improvement. His vision is that all systems on chips should have a reconfigurable IP platform.
Although the entire panel agreed on the reuse of infrastructure IP at different levels and phases in the life cycle, it turned out that the specific aspects for reuse can also depend on the applications involved.
Nano-, Molecular, and Quantum Computing: With the Science of the Small Coming in Big Ways, Are We Ready for the Validation and Test Challenges?
Sandeep K. Shukla, Virginia Tech
On 13 November 2003, the IEEE High-Level Design Validation and Test (HLDVT) workshop organized a panel discussion to start the workshop on a controversial and interesting note. Ramesh Karri (Polytechnic University) and I organized the panel, which featured three participants: Seth C. Goldstein (Carnegie Mellon University), Forrest Brewer (University of California, Santa Barbara), and Sankar Basu (US National Science Foundation). I moderated the panel.
I opened the panel, justifying this workshop's consideration of such a futuristic issue by introducing the audience to the technologies involved. These include the advances in nanotechnology, carbon nanotubes, single-electron transistors, quantum dots and quantum cellular automata, and molecular switches, all of which will influence the dimension and scale of the manufacture of nanoscopic devices. The major challenge in these technologies will be operating conditions amidst a wide variety of physical limits, such as a thermal limit to computation, reduced noise margins, and resulting defects in computing.
Architectural innovations in carrying out computation in the presence of defects seem to be one way to circumvent some problems. However, another solution might be new models of computation, that is, new ways of thinking about computation. Famous physicists and others at the 2003 IEEE Nano Conference speculated about how soon these technologies would appear in computing and when they should concern high-level-design architects and validation engineers. Those speculations mainly suggested that this research is still at least a decade away from maturity, but it is never too early to start tackling these problems.
Seth Goldstein, who has recently been working on reconfigurable and defect-tolerant computing, gave an enlightening and extended panel talk. He explained how to identify defects and avoid defective parts on a reconfigurable nanofabric. He also raised issues regarding existing computing artifacts, such as instruction set architectures (ISAs) and rectangular routing and layouts, noting that these are artifacts of the human-centric design process. Goldstein insisted that in future technologies (where self-assembly and self-organization or induced self-adjustment of devices will be common), human designers won't necessarily involve themselves in the design loop as they do now. As a result, we designers might have to change these artifacts and replace them with artifacts natural to mechanistic design paradigms. Goldstein also pointed out that the massive parallelism inherent in these future nanoscale technologies will lead to new validation challenges. He strongly advocated considering these issues and thinking about new computing paradigms now, rather than later. Architecture, test strategy, fault models, reconfigurability, models of computation, and synchronous versus asynchronous designs are some areas of the design world that are bound to change.
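Goldstein's actual defect-mapping techniques are only summarized above; the toy sketch below merely conveys the general flow of testing a reconfigurable fabric's cells, recording a defect map, and placing logic only on cells that pass. The fabric model, defect rate, self-test stand-in, and placement scheme are all invented for illustration.

    import random

    # Toy defect-avoidance flow for a reconfigurable fabric: probe each cell with a
    # stand-in self-test, record a defect map, and place logic blocks on good cells only.
    random.seed(7)
    ROWS, COLS, DEFECT_RATE = 8, 8, 0.1

    def cell_passes_self_test(r, c):
        return random.random() > DEFECT_RATE       # stand-in for a real per-cell test

    defect_map = {(r, c) for r in range(ROWS) for c in range(COLS)
                  if not cell_passes_self_test(r, c)}
    good_cells = [(r, c) for r in range(ROWS) for c in range(COLS)
                  if (r, c) not in defect_map]

    blocks = ["alu0", "alu1", "reg_file", "router"]      # hypothetical logic blocks
    placement = dict(zip(blocks, good_cells))            # naive first-available placement
    print(f"{len(defect_map)} defective cells avoided; placement: {placement}")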
Following this enlightening and controversial stand, Forrest Brewer began his shorter exposition by describing computing systems as a multiscale phenomenon. By multiscale, he did not refer to size, but rather to scales in terms of coherence, clocking, connection, feedback, and quantum effects. Today, computing performed on silicon has multiple clock domains, so much of computing deals with maintaining coherence by matching these clock domains. Similarly, his clocking and connection scales deal with the maximum scale on which to build a synchronous clock domain, which is subject to timing jitter and predictability issues. Determining this maximum scale is an exercise in trading off clock speed, clocked area, number of taps, and power to maintain adequate noise margins. The feedback scale deals with communication within sequential elements, and thus also with time scales within a single change of logic state. This scale is the time period at which interconnect delays are inconsequential relative to circuit-switching rates. Finally, the quantum-effects scale deals with tunneling, spin coupling, and other communication mechanisms at the quantum level. These design scales provide a framework for determining how communication issues at one level propagate to other levels in the system.
Perhaps the most interesting aspect of Brewer's exposition was his treatment of the uncertainty principle as a fundamental limit to nanoscopic computation. For example, he stated that tradeoffs will occur between latency and errors. That is, achieving less-erroneous computation comes at the cost of increased latency (or energy), and vice versa. He claimed that initial nanoscopic circuits would probably be nano-enhanced memories and that they would appear in the next three to five years. He also claimed that timing would be the easiest tradeoff point of concern to architects and high-level modeling and validation engineers, at least in the near term.
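Brewer's own formulation is not reproduced in this summary; one common back-of-envelope way to see the error-versus-energy (and hence latency) tradeoff he described is the thermal-noise model below, in which the probability of a spurious bit flip falls exponentially with the ratio of the switching energy barrier to the thermal energy.

    \[
      P_{\text{error}} \approx e^{-E_b / kT}
      \quad\Longrightarrow\quad
      E_b \approx kT \,\ln\frac{1}{P_{\text{error}}}
    \]

Under this model, each additional decade of error-rate improvement costs roughly another kT ln 10 of barrier energy per switching event, so pushing error rates very low forces either more energy per operation or slower, more conservative switching, which is the latency-energy-error coupling Brewer flagged for architects.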
Finally, Sankar Basu, an NSF program manager for CAD for micro- and nanometer technologies, discussed the National Nanotechnology Initiative and nanotechnology opportunities at NSF. He also touched on some important points from a recent summit on major issues in nanocomputing.
The panel concluded with questions from the audience on whether nanometer-scale computing is necessary at all, on defect tolerance, and on the role of probability-based modeling in nanometer design. With the recent buzz around nanotechnology, this was a great introduction to the issues that confront computing communities, especially those involved in high-level CAD and validation.