Rome, May 23–29, 2009
ISBN: 978-1-4244-3751-1
pp. 1–11
Yu Chen , Department of Computer Science and Technology, Tsinghua University, Beijing, China
Wenlong Li , Microprocessor Technology Lab, Intel Corp, China
Changkyu Kim , Microprocessor Technology Lab, Intel Corp, China
Zhizhong Tang , Department of Computer Science and Technology, Tsinghua University, Beijing, China
ABSTRACT
Multi-core processors with shared caches are now commonplace. However, prior work on shared cache management has focused primarily on multi-programmed workloads. Those schemes consider how to partition the cache space when simultaneously running applications have different cache behaviors. In this paper, we examine policies for managing shared caches for single multi-threaded applications. First, we show that the shared-cache miss rate can be significantly reduced by reserving a certain amount of space for shared data. We therefore modify the replacement policy to dynamically partition each set between shared and private data. Second, we modify the insertion policy to prevent streaming data (data not reused before eviction) from being promoted to the MRU position. Finally, we use a low-overhead sampling mechanism to dynamically select the optimal policy. Compared to the LRU policy, our scheme reduces the miss rate on average by 8.7% on 8MB caches and 20.1% on 16MB caches.
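The streaming-aware insertion idea from the abstract can be illustrated with a small software model. The sketch below is not the paper's hardware mechanism (it omits the sharing-aware set partitioning and dynamic sampling); it only shows the core insertion trick under the assumption that lines can be classified as streaming: streaming lines enter a set at the LRU position, so if they are never reused they are evicted first instead of displacing useful blocks from the MRU end.

```python
class CacheSet:
    """Toy model of one set in a set-associative cache.

    Illustrative only: the real policy in the paper also partitions
    each set between shared and private data and uses set sampling
    to choose a policy dynamically; none of that is modeled here.
    """

    def __init__(self, ways):
        self.ways = ways
        self.lines = []  # index 0 = MRU, last index = LRU

    def access(self, tag, streaming=False):
        """Access a block; return True on hit, False on miss.

        On a miss, a streaming block is inserted at the LRU position,
        a normal block at the MRU position. On a hit, the block is
        promoted to MRU, as in plain LRU replacement.
        """
        if tag in self.lines:
            self.lines.remove(tag)
            self.lines.insert(0, tag)  # promote to MRU on reuse
            return True
        if len(self.lines) >= self.ways:
            self.lines.pop()           # evict the LRU block
        if streaming:
            self.lines.append(tag)     # streaming: insert at LRU
        else:
            self.lines.insert(0, tag)  # normal: insert at MRU
        return False
```

With a 2-way set holding blocks `a` and `b`, inserting a streaming block `c` leaves `c` at the LRU position, so a later miss evicts `c` rather than the reused block `b`.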
CITATION
Yu Chen, Wenlong Li, Changkyu Kim, Zhizhong Tang, "Efficient shared cache management through sharing-aware replacement and streaming-aware insertion policy", 2009 IEEE International Symposium on Parallel & Distributed Processing (IPDPS), pp. 1-11, doi:10.1109/IPDPS.2009.5161016
