Issue No. 4, April 1998 (vol. 47)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/12.675713
<p><b>Abstract</b>—As the Web continues to explode in size, caching becomes increasingly important. With caching comes the problem of cache consistency. Conventional wisdom holds that strong cache consistency is too expensive for the Web, and that weak consistency methods, such as Time-To-Live (TTL), are most appropriate. This study compares three consistency approaches—<it>adaptive TTL</it>, <it>polling-every-time</it>, and <it>invalidation</it>—through analysis, implementation, and trace replay in a simulated environment. Our analysis shows that weak consistency methods save network bandwidth mostly at the expense of returning stale documents to users. Our experiments show that <it>invalidation</it> generates an amount of network traffic and server workload comparable to <it>adaptive TTL</it> and has similar average client response times, while <it>polling-every-time</it> results in more control messages, higher server workload, and longer client response times. We show that, contrary to popular belief, strong cache consistency can be maintained for the Web at little or no extra cost compared with the current weak consistency approaches, and that it should be maintained using an invalidation-based protocol.</p>
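To make the first approach concrete, the following is a minimal sketch of an adaptive TTL freshness check, in the spirit of the heuristic the abstract refers to: a cached copy is assumed fresh for a fraction of its age at fetch time, on the reasoning that documents unchanged for a long time tend to stay unchanged. The function name, parameters, and the `UPDATE_FACTOR` value are illustrative assumptions, not taken from the paper.

```python
import time

# Illustrative fraction of the document's age used as its TTL
# (a common heuristic value; not a parameter from the paper).
UPDATE_FACTOR = 0.2

def is_fresh(fetched_at, last_modified, now=None):
    """Adaptive TTL check: may the cached copy be served without
    revalidating with the origin server?

    fetched_at    -- time (seconds) the cache retrieved the document
    last_modified -- time (seconds) the document was last modified
    """
    now = time.time() if now is None else now
    age_at_fetch = fetched_at - last_modified  # document age when cached
    ttl = UPDATE_FACTOR * age_at_fetch         # adaptive time-to-live
    return (now - fetched_at) < ttl            # still within its TTL?
```

For example, a document last modified 100,000 seconds before it was fetched gets a TTL of 20,000 seconds; a request 10,000 seconds after the fetch is served from cache, while one 30,000 seconds after triggers revalidation. By contrast, polling-every-time revalidates on every request, and invalidation relies on the server notifying caches when the document changes.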
<b>Index Terms</b>—World Wide Web, cache consistency, invalidation protocols, distributed systems, performance analysis and measurements.
P. Cao and C. Liu, "Maintaining Strong Cache Consistency in the World Wide Web," in IEEE Transactions on Computers, vol. 47, no. 4, pp. 445-457, April 1998.