Issue No. 3, July-September 2000 (vol. 17), pp. 12-14
Published by the IEEE Computer Society
ABSTRACT
This special issue on benchmarking for design and test represents a snapshot of a continuing effort to collect and utilize larger and more sophisticated benchmarks for design and test tools and algorithms. Before we start, let's discuss what we mean by benchmarking, explain why we want to have benchmarks, and provide a short history of benchmarking efforts.
Progress in the design and test of integrated circuits (ICs) is dependent upon the development of new algorithms and the tools that implement them. These algorithms and tools are also driven by the growth in IC complexity. Larger designs drive better tools, which enable the creation of still larger designs. This has led us from ICs with hundreds of transistors running at kHz frequencies to current designs with tens of millions of transistors running at GHz frequencies.
These new tools and algorithms have been developed at universities, electronic design automation (EDA) companies, and large vertically integrated companies such as AT&T, IBM, and Intel. In the 1960s, 1970s, and early 1980s, schematics of large-scale integration (LSI) ICs were freely available. Anyone could look in a Texas Instruments data book, for instance, and see the gate-level design of an arithmetic logic unit (ALU) or counter.
These were the first benchmarks. What is a benchmark for design and test? The online Merriam-Webster Collegiate Dictionary (http://www.m-w.com/cgi-bin/dictionary) defines benchmark as "2c: a standardized problem or test that serves as a basis for evaluation or comparison (as of computer system performance)."
In our sense, a benchmark is a standardized problem (a circuit or circuit segment) used to compare the performance of different tools and algorithms in terms of speed, effectiveness, and quality of result. The measure could be the size or performance of a synthesized design, its fault coverage, or the percentage of routing completed automatically. Benchmarking has a second benefit, however: it measures whether a tool or algorithm can handle a problem at all. Sequential circuits, or circuits described at the behavioral level, might require not just better but fundamentally different tools and techniques. Posing such new problems drives innovation, and this is one of the greatest benefits of benchmarking.
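To make the first sense of comparison concrete, the minimal sketch below tallies two of the metrics just mentioned, fault coverage and runtime, for two hypothetical ATPG tools run on the same benchmark circuit. The tool names and all figures are illustrative assumptions, not results reported anywhere in this issue.

```python
# Minimal sketch: comparing hypothetical ATPG tools on one benchmark circuit.
# All tool names and result figures below are illustrative, not real data.

from dataclasses import dataclass


@dataclass
class AtpgResult:
    tool: str
    total_faults: int      # faults in the collapsed fault list
    detected_faults: int   # faults detected by the generated patterns
    patterns: int          # number of test patterns produced
    cpu_seconds: float     # runtime on the benchmark

    @property
    def fault_coverage(self) -> float:
        """Fault coverage = detected faults / total faults."""
        return self.detected_faults / self.total_faults


# Hypothetical results for a single combinational benchmark circuit.
results = [
    AtpgResult("tool_a", total_faults=1574, detected_faults=1558,
               patterns=112, cpu_seconds=4.2),
    AtpgResult("tool_b", total_faults=1574, detected_faults=1570,
               patterns=96, cpu_seconds=9.7),
]

# Rank the tools by coverage; ties could then be broken by pattern count or runtime.
for r in sorted(results, key=lambda r: r.fault_coverage, reverse=True):
    print(f"{r.tool}: {r.fault_coverage:.2%} coverage, "
          f"{r.patterns} patterns, {r.cpu_seconds:.1f} s")
```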
The history of benchmarking shows this. As designs grew in complexity and cost, it became harder for universities to acquire state-of-the-art designs on which to test new tools and techniques. As a result, the major innovations required to handle complex designs were not pursued, and work concentrated on incremental improvements of existing techniques that could be demonstrated on the small benchmarks that were available.
In 1984, Franc Brglez and Hideo Fujiwara helped initiate the collection of a set of combinational circuit benchmarks. These benchmarks and preliminary results in automatic test pattern generation (ATPG) were presented at the 1985 International Symposium on Circuits and Systems (ISCAS).1 These circuits, now known as the ISCAS'85 benchmarks, and their successors were made widely available in a simple-to-translate netlist format, and they are still being used to compare the results of design-for-testability (DFT) tools today, 15 years later. More information on these and subsequent ACM Special Interest Group on Design Automation (ACM/SIGDA) benchmarks can be found in the summary of benchmarking efforts in this issue.
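The ISCAS circuits are most commonly encountered today in a simple ".bench"-style text form, with one gate per line such as G22 = NAND(G10, G16). The sketch below shows how such a netlist might be read into a gate dictionary; the format details are assumed from the commonly distributed .bench files, not specified by this article, and the example fragment is made up for illustration.

```python
# Minimal sketch: reading an ISCAS-style ".bench" netlist into a dictionary.
# Assumed format: INPUT(x), OUTPUT(x), x = GATE(a, b, ...), '#' comments.

import re
from typing import Dict, List, Tuple

Netlist = Tuple[List[str], List[str], Dict[str, Tuple[str, List[str]]]]


def parse_bench(text: str) -> Netlist:
    inputs, outputs = [], []
    gates: Dict[str, Tuple[str, List[str]]] = {}   # net -> (gate type, fanin nets)
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()        # drop comments and blanks
        if not line:
            continue
        m = re.match(r"INPUT\((\S+)\)", line)
        if m:
            inputs.append(m.group(1))
            continue
        m = re.match(r"OUTPUT\((\S+)\)", line)
        if m:
            outputs.append(m.group(1))
            continue
        m = re.match(r"(\S+)\s*=\s*(\w+)\((.*)\)", line)
        if m:
            out, gtype, fanin = m.groups()
            gates[out] = (gtype.upper(), [s.strip() for s in fanin.split(",")])
    return inputs, outputs, gates


# Example: a small, made-up two-gate fragment in .bench syntax.
example = """
INPUT(G1)
INPUT(G2)
INPUT(G3)
OUTPUT(G5)
G4 = NAND(G1, G2)
G5 = NAND(G4, G3)
"""
ins, outs, gates = parse_bench(example)
print(ins, outs, gates)
```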
The impact of these benchmarks was enormous. Work on test generation increased dramatically, thanks to the availability of example circuits and the desire to improve upon the results of other researchers. The publication of the sequential circuit benchmarks in 1989 led to a similar increase in work on sequential ATPG.
As the 1990s progressed, industrial designs exploded in complexity. However, though new benchmarks were released, they did not keep up with this increase. Industrial EDA developers usually had access to large designs through nondisclosure agreements with customers, and commercial tools and research kept pace with greater transistor counts, higher speeds, and more complex structures. It was much harder for universities to do so, though some tried to address this problem by forging close ties to industry. Even when this was successful, full information on the designs used as test cases could not be published, and the unavailability of the test cases meant that others could not reproduce the work.
By the late 1990s the gap between academic and industrial design and test research was becoming very evident, and several initiatives were begun. The next article in this issue presents the background and status of several of these efforts.
The rest of the issue consists of articles on benchmarking issues and results. The first piece, by Kapur, Hay, and Williams, discusses the first sense of benchmarking, that of measuring performance. They propose a new metric for comparing DFT tools.
Gorla, Moser, Nebel, and Villar next illustrate the use of a benchmark for system specifications, using a purely behavioral description to explore the properties and performance of several important specification and modeling languages.
The next three articles describe experiences with the recently released ITC'99 test benchmarks. Aktouf, Fleury, and Robach describe a tool for inserting scan logic into behavioral-level VHSIC hardware description language (VHDL) code, along with results from using this tool on the Torino benchmark set. Next, Corno, Sonza Reorda, and Squillero from the Politecnico di Torino describe the Torino designs and the results of running a high-level ATPG tool on them. Basto describes test results for three other members of the ITC'99 benchmarks: an interesting combinational circuit, a piece of a larger design, and an application-specific integrated circuit (ASIC), all available as gate-level Verilog netlists.
Finally, Dey, Sanchez, Panigrahi, Chen, and Taylor describe their experience in using a commercial soft microprocessor core for which the register transfer level (RTL) design is available. It is clear that using state-of-the-art designs for benchmarking is far more difficult than using simple examples.
The Last Byte column by Brglez summarizes the features of a new, truly scientific methodology for benchmarking design and test tools. Test cases for a particular problem domain are generated using controlled variations of reference circuits, while classical design-of-experiments techniques and statistical analysis are used to present and compare results.
As you read the articles in this issue, please remember that the effort to obtain new benchmarks is not complete, and in fact never will be. All too soon even the most complex current benchmarks will seem trivial. We need a process to continually obtain examples of state-of-the-art designs that can be distributed to all those who wish to develop significantly new design and test techniques. We need to extend the current benchmarks so that they will be available at all levels of the design hierarchy, from behavioral level down to layout. We need to develop a freely available library with timing and layout information to allow comparison of design tools. We need to develop fault and defect lists for the designs to stress DFT, test generation, and fault simulation tools. Interesting parts of designs should be extracted to serve as sub-benchmarks, so that researchers do not have to handle all of a design to make progress.
Finally, we should try to support open-source design and test tools. Most of an EDA tool consists of reading and writing design databases and maintaining internal circuit databases. Only a small part actually implements the core algorithm. Open-source tools will allow researchers to focus on the high-value parts of a new tool, without each university having to reinvent parsers and flatteners.
Working with realistic, state-of-the-art benchmarks will be much more challenging than dealing with small combinational circuits, but the benefits, both in useful results and in well-trained students, will be great.

Reference
1. F. Brglez and H. Fujiwara, "A Neutral Netlist of 10 Combinational Benchmark Circuits and a Target Translator in Fortran," Proc. IEEE Int'l Symp. Circuits and Systems (ISCAS 85), IEEE, 1985.
Scott Davidson is manager of the DFT Technology Group at Sun Microsystems. He received a BS from MIT, an MS from the University of Illinois at Urbana-Champaign, and a PhD from the University of Southwestern Louisiana, all in Computer Science. He is a member of the Design and Test Editorial Board, editor of The Last Byte column, a member of the International Test Conference Program Committee, vice-Chair of the International Test Synthesis Workshop, and organizer of the ITC'99 Benchmark Initiative.

Justin Harlow is Director of Integrated Circuits and Systems Sciences at the Semiconductor Research Corporation. He holds an MSEE from Duke University, an MBA from the Florida Institute of Technology, and a BSEE from the University of Florida. He is currently pursuing a PhD in Electrical and Computer Engineering at Duke University, and is determined to graduate prior to retirement. His research interests include digital design and CAD, test, and benchmarking. He serves as Benchmarks Chair on the IEEE Design Automation Technical Council, and is a Senior Member of the IEEE.