Efficient shared cache management through sharing-aware replacement and streaming-aware insertion policy
May 23-29, 2009
Yu Chen, Department of Computer Science and Technology, Tsinghua University, Beijing, China
Wenlong Li, Microprocessor Technology Lab, Intel Corp, China
Changkyu Kim, Microprocessor Technology Lab, Intel Corp, China
Zhizhong Tang, Department of Computer Science and Technology, Tsinghua University, Beijing, China
Multi-core processors with shared caches are now commonplace. However, prior work on shared cache management has focused primarily on multi-programmed workloads. These schemes consider how to partition the cache space when simultaneously running applications have different cache behaviors. In this paper, we examine policies for managing shared caches when running a single multi-threaded application. First, we show that the shared-cache miss rate can be significantly reduced by reserving a certain amount of space for shared data. We therefore modify the replacement policy to dynamically partition each set between shared and private data. Second, we modify the insertion policy to prevent streaming data (data not reused before eviction) from being promoted to the MRU position. Finally, we use a low-overhead sampling mechanism to dynamically select the optimal policy. Compared to the LRU policy, our scheme reduces the miss rate on average by 8.7% on 8MB caches and 20.1% on 16MB caches.
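The streaming-aware insertion idea from the abstract can be illustrated with a toy simulation. The sketch below is not the authors' implementation; it contrasts classic LRU (new lines inserted at the MRU position) with an insertion policy that places new lines at the LRU position, so a line only reaches MRU if it is actually reused before eviction. The `CacheSet` class, the tag names, and the access trace are all made up for the demonstration.

```python
# Illustrative sketch only -- not the paper's actual code. A single cache set;
# index 0 of self.lines is the MRU position, the last index is the LRU position.

class CacheSet:
    def __init__(self, ways, insert_at_mru=True):
        self.ways = ways
        self.insert_at_mru = insert_at_mru
        self.lines = []
        self.misses = 0

    def access(self, tag):
        if tag in self.lines:
            # Reuse: promote to MRU (both policies agree on hits).
            self.lines.remove(tag)
            self.lines.insert(0, tag)
            return
        self.misses += 1
        if len(self.lines) == self.ways:
            self.lines.pop()                  # evict the LRU line
        if self.insert_at_mru:
            self.lines.insert(0, tag)         # classic LRU insertion
        else:
            self.lines.append(tag)            # streaming-aware: start at LRU

# A 4-line hot working set (touched twice so it is promoted), an 8-line
# one-pass stream, then the hot working set again.
trace = ["hot%d" % i for i in range(4)] * 2
trace += ["stream%d" % i for i in range(8)]
trace += ["hot%d" % i for i in range(4)]

lru = CacheSet(4, insert_at_mru=True)
saw = CacheSet(4, insert_at_mru=False)
for tag in trace:
    lru.access(tag)
    saw.access(tag)

print("LRU misses:", lru.misses)
print("streaming-aware misses:", saw.misses)
```

On this trace, MRU insertion misses 16 times because the stream flushes the hot working set, while LRU insertion misses only 13 times: each never-reused stream line dies at the LRU position without displacing the reused lines. The paper's full scheme additionally partitions each set between shared and private data and uses sampling to pick the better policy at run time.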
Yu Chen, Wenlong Li, Changkyu Kim, Zhizhong Tang, "Efficient shared cache management through sharing-aware replacement and streaming-aware insertion policy," Proc. IEEE International Symposium on Parallel and Distributed Processing (IPDPS), 2009, pp. 1-11, doi:10.1109/IPDPS.2009.5161016