Issue No. 08 - August (2011 vol. 22)
ISSN: 1045-9219
pp: 1284-1298
Alokika Dash, University of California, Irvine
Brian Demsky, University of California, Irvine
We present a distributed transactional memory system that exploits a new opportunity to automatically hide network latency by speculatively prefetching and caching objects. The system includes an object caching framework, language extensions to support our approach, and symbolic prefetches. To our knowledge, this is the first prefetching approach that can prefetch objects whose addresses have not been computed or predicted. Our approach makes aggressive use of both prefetching and caching of remote objects to hide network latency while relying on the transaction commit mechanism to preserve the simple transactional consistency model that we present to the developer. We have evaluated this approach on three distributed benchmarks, five scientific benchmarks, and several microbenchmarks. We have found that our approach enables our benchmark applications to effectively utilize multiple machines and benefit from prefetching and caching. We have observed a speedup of up to 7.26× for distributed applications on our system using prefetching and caching and a speedup of up to 5.55× for parallel applications on our system.
Keywords: distributed shared memory, software transactional memory, prefetching.
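The abstract's key idea, a "symbolic prefetch" of objects whose addresses have not yet been computed locally, can be illustrated with a small sketch. This is not the paper's actual API; all names here (`prefetch`, `Obj`, the field map) are hypothetical. The point it demonstrates is that the client names a *path* of field traversals, and the server resolves the whole path and ships every object on it in a single round trip, so the subsequent transactional reads hit the local cache.

```java
import java.util.*;

// Hypothetical sketch of symbolic prefetching: the client asks for a whole
// field path (e.g. root.next.next) before the intermediate addresses are
// known; the server walks the path and returns every object along it.
public class SymbolicPrefetchDemo {
    // A remote object: an id plus named references to other object ids.
    static class Obj {
        final int id;
        final Map<String, Integer> fields = new HashMap<>();
        Obj(int id) { this.id = id; }
    }

    // Stand-in for the remote machine's object store.
    static final Map<Integer, Obj> server = new HashMap<>();
    // Local cache populated by prefetch responses.
    static final Map<Integer, Obj> cache = new HashMap<>();
    static int roundTrips = 0;

    // Symbolic prefetch: one request carries the root id and the whole
    // field path; the "server" resolves it and ships each object back.
    static void prefetch(int rootId, String... path) {
        roundTrips++;                      // a single network round trip
        Obj cur = server.get(rootId);
        cache.put(cur.id, cur);
        for (String f : path) {
            Integer next = cur.fields.get(f);
            if (next == null) break;       // path ends early on the server
            cur = server.get(next);
            cache.put(cur.id, cur);        // cache every object on the path
        }
    }

    // A read inside a transaction: here it only consults the local cache.
    static Obj read(int id) {
        Obj o = cache.get(id);
        if (o == null) throw new IllegalStateException("cache miss: " + id);
        return o;
    }

    public static void main(String[] args) {
        // Build a three-node linked list on the server: 1 -> 2 -> 3.
        for (int i = 1; i <= 3; i++) server.put(i, new Obj(i));
        server.get(1).fields.put("next", 2);
        server.get(2).fields.put("next", 3);

        // One symbolic prefetch fetches the entire path even though the
        // client never computed the addresses of nodes 2 and 3.
        prefetch(1, "next", "next");

        // The traversal now runs entirely against the local cache.
        Obj o = read(1);
        o = read(o.fields.get("next"));
        o = read(o.fields.get("next"));
        System.out.println("reached id=" + o.id + " roundTrips=" + roundTrips);
    }
}
```

In the real system the commit mechanism validates the cached copies, so a stale prefetch merely costs a transaction retry rather than a consistency violation.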

B. Demsky and A. Dash, "Integrating Caching and Prefetching Mechanisms in a Distributed Transactional Memory," in IEEE Transactions on Parallel & Distributed Systems, vol. 22, no. 8, pp. 1284-1298, Aug. 2011.