Politecnico di Torino
pp. 14-15
Moore's law keeps providing electronic designers with ever-growing computational power. The list of applications enabled by this phenomenal and unprecedented explosion in human capabilities is continuously expanding. The combination of these two factors, together with the associated economic effects of increasing nonrecurring engineering (NRE) costs and decreasing per-chip costs, is shaping the landscape of electronics. Problems that in the past were either too expensive to solve in silicon or required a whole electronic board can now be tackled with a single chip, one that might have tens of millions of logic gates and could include both analog and digital parts.
Design teams around the world are coping with the seemingly unmanageable problem of designing more complex circuits to solve a broader variety of tasks in a shrinking time to market and with growing safety concerns. You can easily imagine the consequences of a design mistake in an automotive electronics circuit, but even errors in microprocessor units used in non-safety-critical computers can be devastating in purely economic terms.
The integration of entire systems on a chip poses a host of problems to both architects and designers; the solutions to such problems are often very different from their board-level counterparts. This special issue of IEEE Micro includes a set of articles that explores several interesting aspects of system-on-a-chip (SOC) integration.
The first three articles discuss on-chip communication networks that differ greatly from the familiar board-level buses. The on-chip versions are actually more similar to telecommunications networks, with 2D topologies, packet-switching features, and error and multiclock-latency management facilities. The first article in this issue describes Chain, an on-chip packet-switched network architecture with a flexible topology and fairly radical protocol that is fully self-timed. Asynchronous logic eliminates the problem of distributing clocks throughout the chip, thus simplifying synchronization, reducing power consumption and electromagnetic interference, and increasing overall design modularity.
The second article takes a completely different approach to managing chip-level interconnect latencies, using the more familiar synchronous paradigm. Rather than requiring prediction and management of the multicycle delays introduced by long, cross-chip global wires, this approach advocates a separation of concerns between computational cores and point-to-point communication structures. Each core can then be used as if the chip were globally synchronous, provided its clock can be selectively enabled whenever enough data have arrived to perform the next computational step. Communication is buffered synchronously, with automated back-pressure mechanisms built into simple relay stations.
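The back-pressure mechanism described above can be illustrated with a small sketch. The class below is our own toy formulation, not the article's design: a relay station is modeled as a synchronous two-slot buffer inserted on a long wire, which can absorb one in-flight datum when the receiver stalls and asks the sender to stop only when both slots are full.

```python
# Illustrative sketch of the back-pressure idea behind relay stations
# (names and details are ours, not the article's): a relay station is a
# small synchronous buffer inserted on a long global wire. It holds at
# most two items, so it can absorb one in-flight datum when the receiver
# stalls, and it signals the sender to stop only when it is full.

from collections import deque

class RelayStation:
    """Two-slot synchronous buffer with a stop (back-pressure) output."""
    def __init__(self):
        self.slots = deque()  # at most 2 buffered data items

    @property
    def stop_upstream(self) -> bool:
        # Ask the sender to pause before we would overflow.
        return len(self.slots) == 2

    def clock(self, data_in, downstream_ready: bool):
        """One clock edge: emit one item if the downstream stage is ready,
        accept data_in (or None), and return the emitted item (or None)."""
        data_out = self.slots.popleft() if downstream_ready and self.slots else None
        if data_in is not None:
            assert len(self.slots) < 2, "sender ignored stop signal"
            self.slots.append(data_in)
        return data_out
```

For example, if the downstream stage stalls for two cycles, the station buffers two items and raises `stop_upstream`; once the stall clears, the buffered items drain in order, one per cycle.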
The third article rises above low-level communication issues to propose a scalable chip-level interconnect strategy based on a 2D octagonal topology. This approach exhibits both low maximum latency and high bandwidth between any two nodes. Routing is also very fast, based on a simple deterministic algorithm.
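To make the flavor of such deterministic routing concrete, here is a minimal sketch; the node numbering and the routing rule are illustrative assumptions of ours, not the article's exact specification. Eight nodes sit on a ring, with extra "across" links joining each node i to node (i + 4) mod 8; at every hop a node looks only at the destination's relative address, so no routing tables are needed and any two nodes are at most two hops apart.

```python
# Illustrative sketch of deterministic routing on an 8-node octagon
# (our own formulation, not the article's exact specification): ring
# links connect neighbors, and "across" links connect opposite nodes.

def next_hop(current: int, dest: int) -> int:
    """Choose the next node on the way from current to dest."""
    rel = (dest - current) % 8
    if rel == 0:
        return current               # already at the destination
    if rel in (1, 2):
        return (current + 1) % 8     # clockwise ring link
    if rel in (6, 7):
        return (current - 1) % 8     # counterclockwise ring link
    return (current + 4) % 8         # across link for rel in (3, 4, 5)

def route(src: int, dest: int) -> list:
    """Full path from src to dest, including both endpoints."""
    path = [src]
    while path[-1] != dest:
        path.append(next_hop(path[-1], dest))
    return path

# Every pair of nodes is at most two hops apart:
assert all(len(route(s, d)) - 1 <= 2 for s in range(8) for d in range(8))
```

Because each hop's decision depends only on the relative address, the routing logic is a handful of gates per node, which is what makes it fast.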
The next two articles explore the new possibilities offered to chip-level architects by the wealth of available transistors and the strict limits on design time, suggesting a modular array-like structure.
The fourth article in this special issue describes Eclipse, a multiprocessor architecture including processing elements, communication elements, and a memory structure. Providing a programmer's model is the key problem to solve; an effective model will enable widespread use of such software-oriented multiprocessors. Ease of coding, compiling, and debugging is essential to let software developers—often trained in sequential programming—create a broad selection of efficient algorithm implementations for the new architecture. Thanks to a carefully designed on-chip interconnection scheme, this method's parallel random-access machine (PRAM) model offers a synchronous-execution mechanism without the intricacies of the cache coherence problems in classical multiprocessor systems.
The fifth article exploits the availability of on-chip reconfigurable resources to provide a digital-signal-processing-oriented computing fabric that can efficiently implement algorithms with regular computational loads. The fabric's cellular structure provides ease of programming at a fine level of granularity.
On-chip communication and computation architectures are only some of the issues that SOC designers must tackle. Testing a huge number of gates at fabrication time is creating a serious bottleneck because of the high cost of state-of-the-art testing machines and the ever-growing time each chip must spend on one of them. The sixth article in this issue thus discusses the efficient combination of multiple cores, each supporting some design-for-testability standard to control its own internal scan or built-in self-test (BIST) resources. The resulting hierarchical test access mechanism is efficient in terms of test execution time and has a low area overhead.
The seventh and last article deals with minimizing the amount of data brought outside the SOC at test time, especially in a BIST scenario. This approach reduces the time that the chip needs to spend on expensive testing machines, as long as the compacted test sequence preserves the fault detection capabilities of the original sequence. Effective compaction algorithms must, at all costs, avoid accepting a faulty circuit (a false-positive result) and minimize the number of good circuits rejected (false-negative results).
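The requirement that a compacted sequence preserve the original fault detection capability can be sketched with a toy formulation of our own (not the article's algorithm): given a record of which faults each test vector detects, a greedy pass keeps the vectors that cover the most not-yet-detected faults and stops once every originally detected fault is covered, so the compacted set rejects exactly the circuits the original set would.

```python
# Illustrative sketch (our own toy formulation, not the article's method)
# of fault-coverage-preserving test compaction: greedily keep the vectors
# that detect the most still-uncovered faults, stopping when the compacted
# set detects every fault the original set detects.

def compact(detects: dict) -> list:
    """detects maps a vector id to the set of faults it detects.
    Returns a (usually smaller) list of vector ids with identical coverage."""
    remaining = set().union(*detects.values())  # faults still to cover
    kept = []
    while remaining:
        # Pick the vector covering the most still-undetected faults.
        best = max(detects, key=lambda v: len(detects[v] & remaining))
        kept.append(best)
        remaining -= detects[best]
    return kept
```

For instance, if vector `t1` detects faults `f1` and `f2`, `t2` detects only `f2`, and `t3` detects `f3`, the compacted set keeps `t1` and `t3` and drops `t2` without losing any detection capability.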
For readers interested in more coverage of SOCs, IEEE Design & Test will offer a special issue on platform-based SOC design in its Nov.-Dec. 2002 issue.