<p><b>Abstract</b>—Scalable shared-memory multiprocessors are often slowed down by long-latency memory accesses. One way to cope with this problem is to use data forwarding to overlap memory accesses with computation. With data forwarding, when a processor produces a datum, in addition to updating its cache, it sends a copy of the datum to the caches of the processors that the compiler identified as consumers of it. As a result, when the consumer processors access the datum, they find it in their caches.</p><p>This paper addresses two main issues. First, it presents a framework for a compiler algorithm for forwarding. Second, using address traces, it evaluates the performance impact of different levels of support for forwarding. Our simulations of a 32-processor machine show that optimistic support for forwarding speeds up five applications by an average of 50% for large caches and 30% for small caches. For large caches, most sharing read misses are eliminated, while for small caches, forwarding does not significantly increase the number of conflict misses. Overall, support for forwarding in shared-memory multiprocessors promises to deliver good application speedups.</p>
Memory latency hiding, forwarding and prefetching, multiprocessor caches, scalable shared-memory multiprocessors, address trace analysis.
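To make the forwarding mechanism described in the abstract concrete, the following is a minimal sketch of the idea in Python. It is a hypothetical illustration, not the paper's trace-driven simulator: the `Processor` class, its method names, and the dictionary-based caches are all assumptions made for this example. The key point it models is that a producer's write pushes a copy of the datum into the caches of compiler-identified consumers, so their later reads hit locally instead of incurring a sharing read miss.

```python
# Hypothetical model of producer-initiated data forwarding (assumed names;
# not the paper's simulator). Each processor has a private cache; a
# forwarding write updates the producer's cache and memory, then pushes
# copies to the caches of the consumers the compiler identified.

class Processor:
    def __init__(self, pid):
        self.pid = pid
        self.cache = {}      # address -> value
        self.misses = 0      # count of reads served from memory

    def read(self, addr, memory):
        # A hit finds the datum locally; a miss goes to memory
        # (the long-latency path forwarding tries to avoid).
        if addr in self.cache:
            return self.cache[addr]
        self.misses += 1
        value = memory[addr]
        self.cache[addr] = value
        return value

    def write_and_forward(self, addr, value, memory, consumers):
        # Update own cache and memory, then forward copies so the
        # datum is already cached when each consumer reads it.
        self.cache[addr] = value
        memory[addr] = value
        for consumer in consumers:
            consumer.cache[addr] = value


memory = {0x10: 0}
p0, p1 = Processor(0), Processor(1)

# p0 produces a value and forwards it to p1, the identified consumer.
p0.write_and_forward(0x10, 42, memory, consumers=[p1])

# p1's read now hits in its own cache: no sharing read miss.
assert p1.read(0x10, memory) == 42
assert p1.misses == 0
```

A processor that was not in the consumer set would still miss on its first read, which is why the compiler analysis that identifies consumers is central to the scheme.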

D. A. Koufaty, X. Chen, D. K. Poulsen, and J. Torrellas, "Data Forwarding in Scalable Shared-Memory Multiprocessors," IEEE Transactions on Parallel & Distributed Systems, vol. 7, pp. 1250-1264, 1996.