Functional verification involves proving that an IC design works properly relative to specification, protocol, and functionality. It does not necessarily guarantee correct operation with respect to timing, power, noise, routability, and so forth. Other tools and methodologies address these important concerns.
For example, functional verification techniques can typically certify that a design adheres to its specified protocol, but they cannot confirm that it will run at its target clock frequency.
There are three main ways to functionally verify designs: simulation, emulation, and formal verification. This special issue focuses on formal verification. However, all three approaches operate on a representation—in fact, many representations—of the design under verification (DUV). The DUV is usually expressed as a Verilog or VHDL (VHSIC hardware description language) design, or possibly a program in a higher-level language such as C, C++, or a variant like SystemC. DUVs are represented at several different abstraction levels: register transfer, gate, transistor, and circuit. This variety of models suggests that ICs require two basic qualities to verify: equivalence between different models of the same design, and satisfaction of properties by a given model.
As chip designers continually implement more cores, memories, buses, and peripherals on a chip, the complexity of ICs is being pushed beyond 10 million transistors. Functionally verifying a chip is consuming ever more design resources: people and CPU cycles. This special issue addresses this problem from the perspective of formal verification. Of course, describing all the tools and methodologies for functionally verifying chips in one issue is impossible, so we highlight just a few important developments in the field.
First, Hewlett-Packard's Harry Foster describes how to use a Boolean equivalence checker to verify the equivalence of RTL and gate-level models. Equivalence-checking tools have become standard in the industry, and all major CAD vendors offer them. Today's tools focus on comparing RTL, gate-level, and switch-level models. They are based on various techniques such as binary decision diagrams, Boolean satisfiability (SAT) algorithms, and automatic test pattern generation algorithms.
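To make the equivalence-checking task concrete, here is a toy sketch, with a hypothetical NAND-only netlist invented for illustration: it exhaustively compares an RTL-style "golden" full adder against a gate-level implementation over all input assignments. Real checkers avoid this exponential enumeration by reasoning symbolically with BDDs or SAT, but the question they answer is the same.

```python
from itertools import product

# RTL-style "golden" model: a 1-bit full adder.
def rtl_full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# Gate-level implementation built only from NAND gates
# (a hypothetical netlist, for illustration).
def nand(x, y):
    return 1 - (x & y)

def gate_full_adder(a, b, cin):
    t1 = nand(a, b)
    x1 = nand(nand(a, t1), nand(b, t1))   # x1 = a XOR b
    t2 = nand(x1, cin)
    s = nand(nand(x1, t2), nand(cin, t2)) # s = x1 XOR cin
    cout = nand(t1, t2)                   # cout = a&b | (a^b)&cin
    return s, cout

# Exhaustive check over all 2^3 input assignments -- the brute-force
# analogue of what BDD- and SAT-based checkers do symbolically.
def equivalent():
    return all(
        rtl_full_adder(a, b, c) == gate_full_adder(a, b, c)
        for a, b, c in product([0, 1], repeat=3)
    )

print(equivalent())  # True: the netlist matches the RTL model
```

Flipping any gate in the netlist makes `equivalent()` return False, which is exactly the kind of mismatch an equivalence checker reports, along with a counterexample input.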
For the future, several university research projects aim at elevating the level of models handled by equivalence checkers, such as the work by Randy Bryant and his students at Carnegie Mellon University in comparing ISA models to RTL designs. Unfortunately, there has been little work on formally comparing abstract models in C or C++ to RTL models.
Two articles in this issue address the problem of property checking. The first, by Intel's Robert Jones et al., describes a methodology that combines symbolic trajectory evaluation (STE)—that is, symbolic simulation—with theorem proving to verify the correctness of arithmetic units. The authors have developed a highly sophisticated verification system and deployed it on several ICs. The second article, by Motorola's Narayanan Krishnamurthy et al., discusses the concerns regarding how to make STE usable in a commercial design environment. STE has successfully uncovered bugs in memory arrays of several PowerPC designs.
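A key ingredient of STE is three-valued (0, 1, X) simulation, where X means "unknown" and one ternary run can cover many binary input combinations at once. The sketch below (a simplified illustration, not the Intel or Motorola systems) checks a trajectory-style assertion on a 2-to-1 mux while leaving one data input at X:

```python
# Three-valued gate evaluation: X means "unknown", but a gate can
# still produce a definite output when one input dominates.
X = "X"

def and3(a, b):
    if a == 0 or b == 0:        # a controlling 0 forces the output
        return 0
    if a == 1 and b == 1:
        return 1
    return X                    # otherwise the result is unknown

def or3(a, b):
    if a == 1 or b == 1:        # a controlling 1 forces the output
        return 1
    if a == 0 and b == 0:
        return 0
    return X

def not3(a):
    return X if a == X else 1 - a

# 2-to-1 mux: out = (sel AND d1) OR (NOT sel AND d0).
def mux(sel, d0, d1):
    return or3(and3(sel, d1), and3(not3(sel), d0))

# Trajectory-style assertion: "if sel is 1 and d1 is 1, then out is 1."
# Leaving d0 at X checks both values of d0 in a single evaluation.
print(mux(1, X, 1))  # 1: the assertion holds regardless of d0
```

This X-abstraction is what lets STE handle large memory arrays: inputs irrelevant to the property stay unknown, so the checker never enumerates them.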
Because of this issue's focus on formal verification, it does not concentrate on simulation as a verification technique, though simulation is still by far the most prevalent method used in the industry. However, Serdar Tasiran from Compaq and Kurt Keutzer from Berkeley present a survey on the various ways to measure simulation coverage. Interestingly, there is no single best-known coverage measure; in fact, there is no known mathematical theory of simulation coverage. Nevertheless, this article discusses the various measures in common use.
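One of the simplest measures in that family is toggle coverage: the fraction of signals that took both values 0 and 1 at some point during simulation. A minimal sketch, with made-up signal names and traces:

```python
# Toy toggle-coverage measure over recorded simulation traces.
def toggle_coverage(traces):
    """traces maps signal name -> list of values over simulation cycles."""
    toggled = [name for name, vals in traces.items()
               if 0 in vals and 1 in vals]
    return len(toggled) / len(traces)

traces = {
    "req":   [0, 1, 1, 0],   # toggled: saw both 0 and 1
    "ack":   [0, 0, 1, 1],   # toggled
    "error": [0, 0, 0, 0],   # stuck at 0: never exercised
    "mode":  [1, 1, 1, 1],   # stuck at 1: never exercised
}
print(toggle_coverage(traces))  # 0.5: half the signals were exercised
```

Low toggle coverage flags signals the test suite never exercised, but high coverage by this or any other single measure does not imply functional correctness, which is exactly the gap the survey discusses.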
Another important area this issue does not focus on is model-checking and semiformal verification tools. Vendors have released several products, such as Verplex's BlackTie, 0-IN's Search, RealIntent's Verix, Averant's Solidify, and Cadence's FormalCheck; and Synopsys has announced an offering called Ketchum.
Verification has become a very exciting discipline, as increasingly more formal and semiformal tools and methodologies are becoming available.