Issue No. 1 - January-March 1999 (Vol. 16)
Published by the IEEE Computer Society
Dynamic random-access memory (DRAM) has played a central role as a driver of semiconductor technology over the past three decades. This role has diminished somewhat in recent years because of the diverging evolution of specialized logic and DRAM processes. Logic processes, optimized for high speed and high interconnectivity, provide low-threshold voltage transistors and four or more layers of metal wiring. DRAM processes, optimized for high cell array density and low cell leakage, provide relatively high threshold voltage transistors and rarely more than two layers of metal for interconnect. Despite these fundamental differences, DRAM technologies will continue to pioneer the smallest semiconductor feature sizes. DRAMs of the 256-Mbit and 1-Gbit generations require stable processes capable of producing parts with line widths no greater than 200 and 150 nm, respectively.
DRAMs will likely use more complex cell structures and new dielectric materials, posing additional challenges in process control and quality assurance. Novel chip-level organizations and high-speed memory bus protocols will emerge as architectural remedies to the growing disparity between processor instruction execution rates and memory random-access times. Designers will not achieve further advances in DRAM simply by extending known techniques to smaller feature sizes. Rather, the commercial success of gigabit parts and the proliferation of embedded DRAMs and other specialized memory types will depend on fundamental breakthroughs in design and test. This theme issue includes five articles that consider many of the key DRAM design and test challenges.
For various reasons, DRAM design and test expertise has remained the domain of relatively few specialists. Specialization in layout and circuit-level design, as well as in testing methodology, occurred very early during the emergence of DRAM technology. Much of a DRAM's internal operation must be treated in purely analog terms. For example, designers must cope with the challenge of recovering relatively low-level signals from densely packed arrays of storage cells, despite the presence of comparatively large levels of capacitively coupled noise. Thus, they must pay particular attention to both layout and circuit design. Cell structure, semiconductor technology, physical cell array layout, and sense amplifier design must be optimized together to achieve reliable data storage and sensing at the highest possible cell density. Decoupling physical layout design, circuit-level design, and logic-level design, which has been standard practice for digital systems, is far less feasible in DRAM design.
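A simple charge-sharing calculation illustrates how small the sense signal is. When the access transistor connects the storage capacitor to the much larger bitline capacitance, the bitline moves by only a fraction of the stored cell voltage. The capacitance and voltage values below are typical-order assumptions for illustration, not figures from any article in this issue.

```python
# Charge-sharing estimate of the bitline sense signal in a DRAM read.
# All component values are illustrative assumptions.

C_CELL = 30e-15      # storage capacitor, ~30 fF (assumed)
C_BITLINE = 300e-15  # bitline parasitic capacitance, ~300 fF (assumed)
V_CELL = 2.5         # stored "1" level in volts (assumed)
V_PRECHARGE = V_CELL / 2  # bitline precharged to VDD/2

# After charge sharing, the bitline settles at the weighted average of the
# two initial voltages; the sense amplifier must resolve this small swing.
delta_v = (V_CELL - V_PRECHARGE) * C_CELL / (C_CELL + C_BITLINE)
print(f"bitline swing: {delta_v * 1000:.0f} mV")
```

With these assumed values the swing is only about 114 mV, which is why capacitively coupled noise from neighboring lines is such a serious design concern.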
Built-in self-test (BIST) techniques have not gained wide acceptance in commodity DRAM designs. Design-for-testability enhancements for DRAM have typically been limited to parallel test modes and other specialized features for enhancing the observability and controllability of certain subsystems. However, the lengthy test times of the upcoming 256-Mbit and 1-Gbit parts are encouraging the development of BIST architectures, at least in research and preproduction parts. Figure 1 shows a 256-Mbit synchronous DRAM from Texas Instruments, which includes a mask-microprogrammable BIST circuit that implements 10 different test algorithms.[1] The BIST circuit, shown as two narrow blacked-out regions between the two rightmost cell subarrays, occupies only 0.5% of the total chip area. With this modification, the designers estimate that 60% of the total test time can be transferred from an expensive tester to relatively inexpensive, BIST-controlled testing.
Several trends confront digital designers with barriers that impede increases in system performance. The memory subsystem is becoming increasingly important as the gap grows between rapidly improving microprocessor data rate requirements and more slowly increasing DRAM data bandwidth capabilities. To deal with the situation, researchers have proposed several high-performance standards, such as synchronous DRAM (SDRAM), double data rate SDRAM, synchronous link DRAM (SLDRAM), and the proprietary Rambus standard. In one of the articles in this special issue, Millar and Gillingham report on simulation results for two of the main competing technologies for next-generation, high-bandwidth memory subsystems.
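Peak bandwidth for these interfaces follows directly from clock rate, bus width, and transfers per clock, which is why double-data-rate signaling and narrow, very fast channels both appear among the contenders. The clock rates and bus widths below are illustrative assumptions of the era, not specifications from the standards themselves.

```python
# Peak-bandwidth comparison for the memory interface styles mentioned
# above. Clock rates and bus widths are illustrative assumptions.

def peak_bw_mb_s(clock_mhz, bus_bits, transfers_per_clock=1):
    """Peak transfer rate in MB/s for a synchronous memory bus."""
    return clock_mhz * 1e6 * (bus_bits / 8) * transfers_per_clock / 1e6

print("SDR SDRAM, 100 MHz, 64-bit bus:", peak_bw_mb_s(100, 64), "MB/s")
print("DDR SDRAM, 100 MHz, 64-bit bus:", peak_bw_mb_s(100, 64, 2), "MB/s")
print("Narrow fast channel, 400 MHz, 16-bit, DDR:",
      peak_bw_mb_s(400, 16, 2), "MB/s")
```

The sketch shows the two routes to higher bandwidth: transfer on both clock edges over a wide bus, or run a much narrower channel at a much higher clock, as the Rambus approach does.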
Increasingly, large processor designs are integrating DRAM, along with SRAM cache, directly onto the same die as the processor to exploit high-bandwidth DRAM access modes. Multimedia and other special-purpose processors also integrate DRAM on chip to access wide data words, minimize power consumption, and reduce package count. Figure 2 shows a hard disk controller chip from Siemens AG that integrates 1.875 Mbits of DRAM (in the upper left corner) with a CPU core, a phase-locked loop, 1 Kbyte of embedded SRAM, 1 Kbyte of ROM, and synthesized glue logic.[2] Such designs commonly provide on-chip BIST to minimize test time.
At the process level, successfully integrating DRAM with logic usually implies a choice between implementing logic in a DRAM process and implementing DRAM in a logic process. In their article, Elliott, Stumm, Snelgrove, Cojocaru, and McKenzie pursue the first option, proposing a processor-in-memory architecture that supports a massively parallel, single-instruction, multiple-data (SIMD) programming model.
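The SIMD programming model can be pictured as a single broadcast instruction applied simultaneously by a simple processing element at every memory column. The toy model below illustrates only that broadcast idea; the function names and the tiny instruction set are invented for this sketch and do not reflect the architecture described in the article.

```python
# Toy model of the processor-in-memory SIMD idea: one instruction is
# broadcast to a processing element at every memory column, and each PE
# applies it to its own local 8-bit word. The instruction set here is
# an invented illustration, not the article's architecture.

def simd_broadcast(op, column_words, operand):
    """Apply the same operation to every column's local word."""
    ops = {
        "add": lambda word: (word + operand) & 0xFF,  # 8-bit wraparound
        "and": lambda word: word & operand,
    }
    return [ops[op](word) for word in column_words]

row = [0x10, 0x20, 0xFF, 0x7F]
print(simd_broadcast("add", row, 1))  # [17, 33, 0, 128]
```

The appeal of putting the PEs in the memory is that an entire row participates in one operation, so the effective operand width equals the row width rather than the external bus width.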
DRAM capacities will continue to increase as minimum feature sizes shrink and new storage cell technologies emerge. However, these improvements will come at a rapidly escalating processing cost. An alternative approach to increasing the density of both commodity and embedded DRAMs is to store multiple bits in each storage cell. The article by Redeker, Cockburn, and Elliott examines fault modeling and testing issues for a DRAM that stores 2 bits per cell.
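Storing 2 bits per cell means writing the cell capacitor to one of four voltage levels and requiring the sense circuitry to discriminate among all four, which shrinks the margin between adjacent levels to roughly a third of the supply. The sketch below illustrates that encoding; the supply voltage and nominal levels are assumptions for illustration, not parameters from the article.

```python
# Sketch of 2-bits-per-cell storage: four nominal cell voltages encode
# the four 2-bit values, and sensing picks the nearest level. The
# supply voltage and level spacing are illustrative assumptions.

VDD = 2.5  # volts (assumed)
LEVELS = {0b00: 0.0, 0b01: VDD / 3, 0b10: 2 * VDD / 3, 0b11: VDD}

def sense(voltage):
    """Return the 2-bit value whose nominal level is closest to `voltage`."""
    return min(LEVELS, key=lambda bits: abs(LEVELS[bits] - voltage))

print(bin(sense(0.9)))  # nearest nominal level is VDD/3 -> 0b1
```

Because adjacent levels are separated by only VDD/3 instead of the full VDD, leakage and noise that a conventional cell would tolerate can now flip a stored value between neighboring codes, which is what makes the fault modeling and testing questions in the article interesting.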
Digital engineers have been shielded, by and large, from many of DRAM's inherent complexities. When logic chip designers have required embedded storage capability, they have usually relied on static RAM. SRAM technology, which uses static latch elements rather than capacitors for bit storage, is more compatible with conventional logic techniques than DRAM technology. SRAM's advantages come at the cost of reduced storage density, however, since DRAM arrays have at least four times the density of SRAM arrays with similar feature sizes. The desire for large embedded RAM capacities to implement single-chip systems has driven interest in designing and testing embedded DRAM. The article by Miyano, Sato, and Numata reports on the state of the art for embedded DRAMs with test support features.
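The density gap follows from cell structure: a DRAM cell is one transistor plus one capacitor, while an SRAM cell needs six transistors. The back-of-the-envelope comparison below uses typical textbook cell areas expressed in squared feature sizes (F^2); these area figures are assumptions for illustration, not measurements from any article in this issue.

```python
# Back-of-the-envelope comparison behind the "at least four times the
# density" claim. Cell areas in F^2 (squared minimum feature size) are
# typical textbook figures, used here only as assumptions.

DRAM_CELL_F2 = 8    # assumed area of a 1T1C DRAM cell, in F^2
SRAM_CELL_F2 = 120  # assumed area of a 6T SRAM cell, in F^2

def megabits_per_mm2(cell_f2, feature_nm):
    """Ideal cell-array density for a given cell area and feature size."""
    cell_mm2 = cell_f2 * (feature_nm * 1e-6) ** 2  # nm -> mm, squared
    return 1 / cell_mm2 / 1e6

for name, area in (("DRAM", DRAM_CELL_F2), ("SRAM", SRAM_CELL_F2)):
    print(f"{name}: {megabits_per_mm2(area, 200):6.2f} Mbit/mm^2 at 200 nm")
```

With these assumed figures the raw cell-array ratio is far more than four to one; the article's more conservative "at least four times" reflects the overhead of decoders, sense amplifiers, and other periphery in real arrays.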
An attractive test strategy for large memories is on-chip BIST. Thin profit margins in commodity DRAM have prevented widespread use of BIST, but this situation might change as storage capacities reach 1 Gbit. Embedded memory presents a severe testing challenge due to the large number of states that must be verified and the lack of direct controllability and observability of memory connections. Embedded SRAMs are commonly tested on chip with BIST circuits that apply standard memory test patterns. It is unlikely that SRAM BIST solutions will carry over completely unchanged to DRAM. Fundamental differences between the two memory technologies, such as the write-back requirements and subtle pattern sensitivity failure modes in DRAM, suggest that we will need new memory BIST techniques. The article by Huang, Huang, Wu, Wu, and Chang describes progress in designing embedded DRAM with programmable BIST.
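A march test of the kind an SRAM BIST engine applies can be modeled in a few lines. The sketch below runs March C-, a standard SRAM march algorithm; whether such an algorithm suffices for DRAM, given write-back and pattern-sensitivity effects, is precisely the question raised above, so this model is illustrative rather than a DRAM solution.

```python
# Minimal software model of a march-test engine running March C-:
#   up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); down(r0)
# March C- targets SRAM-style faults; it is not claimed to cover the
# DRAM-specific failure modes discussed in the text.

def march_c_minus(read, write, n):
    """Run March C- over addresses 0..n-1; return failing addresses."""
    fails = set()
    up, down = range(n), range(n - 1, -1, -1)
    # Each element: (address order, expected read value or None, write value or None)
    elements = [
        (up, None, 0), (up, 0, 1), (up, 1, 0),
        (down, 0, 1), (down, 1, 0), (down, 0, None),
    ]
    for order, expect, value in elements:
        for addr in order:
            if expect is not None and read(addr) != expect:
                fails.add(addr)
            if value is not None:
                write(addr, value)
    return sorted(fails)

mem = [0] * 16
read = lambda a: 1 if a == 5 else mem[a]      # model cell 5 as stuck-at-1
write = lambda a, v: mem.__setitem__(a, v)
print(march_c_minus(read, write, 16))  # [5]
```

Running the same engine against a fault-free memory returns an empty list, which is the pass/fail signature a BIST controller would latch into an on-chip status register.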
Interested readers will find a thorough treatment of DRAM testing techniques in Testing Semiconductor Memories: Theory and Practice by A.J. van de Goor (John Wiley and Sons, 1991). Semiconductor Memories: Testing, Technology, and Reliability by A.K. Sharma (IEEE Press, 1997) covers both design and test of DRAM. Advanced research articles on DRAM appear in IEEE Journal of Solid-State Circuits, IEEE Transactions on VLSI Systems, IEEE Transactions on CAD of Integrated Circuits and Systems, IEEE Micro, and the Journal of Electronic Testing: Theory and Applications. IEEE conferences covering these topics include the International Test Conference, International Solid-State Circuits Conference, and International Workshop on Memory Technology, Design, and Testing.
Bruce F. Cockburn is an associate professor of computer engineering in the Department of Electrical and Computer Engineering at the University of Alberta, Edmonton, Canada. He is also a scientist at Telecommunications Research Laboratories in Edmonton. His primary research interests are memory technology, digital logic testing, and voice-band signal processing. Previously, he designed automatic test equipment and fault detection software for Mitel Corporation, Kanata, Ontario. Cockburn graduated from the Queen's University at Kingston with a BSc in engineering physics. He received MMath and PhD degrees in computer science from the University of Waterloo. He is a member of the IEEE and the Computer Society.
Fabrizio Lombardi heads the Department of Electrical and Computer Engineering of Northeastern University and holds its International Test Conference endowed professorship. His research interests are fault-tolerant computing, testing and design of digital systems, configurable computing, defect tolerance, and CAD VLSI. He was formerly a faculty member at Texas Tech University; the University of Colorado, Boulder; and Texas A&M University. He is an editor of IEEE Transactions on Computers and has guest-edited special issues of IEEE D&T, IEEE Micro, and IEEE Transactions on Computers. He received the Research Initiation Award from the IEEE/Engineering Foundation (1985-86), a Motorola Silver Quill Award (1996), and an International Research Award from the Japanese Ministry of Science and Education (1993-99). He was an IEEE Computer Society Distinguished Visitor during 1990-93. Lombardi graduated from the University of Essex, UK, with a BSc in electronic engineering. He received a master's degree in microwaves and modern optics and a diploma in microwave engineering from the Microwave Research Unit at University College, London. He received a PhD from the University of London. He is a member of the IEEE and the Computer Society.
Fred J. Meyer is an assistant professor in the Department of Electrical and Computer Engineering at Northeastern University. He was previously a senior lecturer in the Department of Computer Science at Texas A&M University. His research interests are distributed computer systems and algorithms, reliable and secure communication protocols, reliable system design and validation, and IC yield enhancement and assessment. He was formerly with the United States Air Force and Caelum, Inc. He has served as the publicity chair of the 1997 and 1998 IEEE International Workshops on Memory Technology, Design, and Testing. Meyer received a BSc and a PhD from the University of Massachusetts, Amherst. He is a member of the IEEE and the Computer Society.