Design Considerations for Distributed Caching on the Internet
Proceedings of the 19th IEEE International Conference on Distributed Computing Systems (ICDCS 1999)
Austin, Texas
May 31, 1999 to June 4, 1999
ISBN: 0-7695-0228-8
pp. 273
Michael Dahlin , University of Texas at Austin
Harrick M. Vin , University of Texas at Austin
Jonathan S. Kay , Cephalapod Proliferationists, Inc.
ABSTRACT
In this paper, we describe the design and implementation of an integrated architecture for cache systems that scale to hundreds or thousands of caches with thousands to millions of users. Rather than simply try to maximize hit rates, we take an end-to-end approach to improving response time by also considering hit times and miss times. We begin by studying several Internet caches and workloads, and we derive three core design principles for large scale distributed caches: (1) minimize the number of hops to locate and access data on both hits and misses, (2) share data among many users and scale to many caches, and (3) cache data close to clients. Our strategies for addressing these issues are built around a scalable, high-performance data-location service that tracks where objects are replicated. We describe how to construct such a service and how to use this service to provide direct access to remote data and push-based data replication. We evaluate our system through trace-driven simulation and find that these strategies together provide response time speedups of 1.27 to 2.43 compared to a traditional three-level cache hierarchy for a range of trace workloads and simulated environments.
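The core mechanism described in the abstract is a data-location service that tracks where objects are replicated, so that a cache can serve misses by fetching directly from a nearby replica instead of walking a cache hierarchy, and can push hints about new copies to its peers. The sketch below illustrates that idea in a single process. The names (HintDirectory, CacheNode, publish, locate, and so on) are illustrative assumptions and are not interfaces from the paper; this is a toy model of the lookup path, not the paper's distributed implementation.

from collections import defaultdict


class HintDirectory:
    """Tracks which caches are believed to hold each object (hints may be stale)."""

    def __init__(self):
        self._replicas = defaultdict(set)  # object URL -> set of cache ids

    def publish(self, url, cache_id):
        """A cache announces that it now holds a copy of `url`."""
        self._replicas[url].add(cache_id)

    def withdraw(self, url, cache_id):
        """A cache announces that it has evicted its copy of `url`."""
        self._replicas[url].discard(cache_id)

    def locate(self, url):
        """Return the set of caches thought to hold `url` (possibly empty)."""
        return set(self._replicas[url])


class CacheNode:
    """A proxy cache that consults the directory on a miss rather than a parent hierarchy."""

    def __init__(self, cache_id, directory, fetch_from_origin):
        self.cache_id = cache_id
        self.directory = directory
        self.fetch_from_origin = fetch_from_origin  # callable(url) -> bytes
        self.store = {}   # locally cached objects
        self.peers = {}   # cache_id -> CacheNode, filled in by the deployment

    def get(self, url):
        # 1. Local hit: served in one hop.
        if url in self.store:
            return self.store[url]
        # 2. Miss: ask the location service which peer holds a replica.
        for peer_id in self.directory.locate(url):
            peer = self.peers.get(peer_id)
            if peer is not None and url in peer.store:
                data = peer.store[url]        # direct access to the remote copy
                self._admit(url, data)
                return data
        # 3. No replica known: fall through to the origin server.
        data = self.fetch_from_origin(url)
        self._admit(url, data)
        return data

    def _admit(self, url, data):
        self.store[url] = data
        self.directory.publish(url, self.cache_id)  # publish the new hint

A minimal usage example under the same assumptions: the first request misses everywhere and goes to the origin, while the second is satisfied from the peer cache in a single remote hop because the directory hint points there.

directory = HintDirectory()
origin = lambda url: ("origin copy of " + url).encode()
a = CacheNode("cache-a", directory, origin)
b = CacheNode("cache-b", directory, origin)
a.peers["cache-b"] = b
b.peers["cache-a"] = a
b.get("http://example.com/x")   # miss everywhere: fetched from the origin
a.get("http://example.com/x")   # hint points at cache-b: served from the peer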
INDEX TERMS
caching, cooperative caching, WWW, hierarchical caching, hint
CITATION
Renu Tewari, Michael Dahlin, Harrick M. Vin, and Jonathan S. Kay, "Design Considerations for Distributed Caching on the Internet," Proceedings of the 19th IEEE International Conference on Distributed Computing Systems (ICDCS 1999), pp. 273, doi:10.1109/ICDCS.1999.776529