1999 Fifth International Symposium on High-Performance Computer Architecture (HPCA-5)
Jan. 9–12, 1999
Processor speeds are increasing rapidly, and memory speeds are not keeping up. Streaming computations (such as multimedia or scientific applications) are among those whose performance is most limited by the memory bottleneck. Rambus hopes to bridge the processor/memory performance gap with a recently introduced DRAM that can deliver up to 1.6 Gbytes/sec. We analyze the performance of these interesting new memory devices on the inner loops of streaming computations, both for traditional memory controllers that treat all DRAM transactions as random cacheline accesses, and for controllers augmented with streaming hardware. For our benchmarks, we find that accessing unit-stride streams in cacheline bursts in the natural order of the computation exploits from 44% to 76% of the peak bandwidth of a memory system composed of a single Direct RDRAM device, and that accessing streams via a streaming mechanism with a simple access ordering scheme can improve performance by factors of 1.18 to 2.25.
Sung I. Hong, Sally A. McKee, Maximo H. Salinas, Robert H. Klenke, James H. Aylor, Wm. A. Wulf, "Access Order and Effective Bandwidth for Streams on a Direct Rambus Memory", Proc. Fifth International Symposium on High-Performance Computer Architecture (HPCA-5), p. 80, 1999, doi:10.1109/HPCA.1999.744337