2014 9th IEEE International Conference on Networking, Architecture, and Storage (NAS) (2014)
China
Aug. 6, 2014 to Aug. 8, 2014
ISBN: 978-1-4799-4087-5
pp: 43-52
ABSTRACT
This paper is motivated by three key observations: (1) performance degrades when the accesses of heterogeneous streams are interleaved; (2) for a slow stream, sequential accesses suffer a high miss rate in the prefetching cache; (3) under concurrency, providing fairness and QoS to concurrent streams is important, yet traditional prefetching algorithms largely ignore it. We therefore present Fema, a caching management algorithm that enforces fairness and efficiency for concurrent heterogeneous streams. Fema rests on three key designs: (1) an adaptive prefetching framework (Fema Ada), in which we propose a rate-aware adjustment of the prefetching degree and analyze the optimal partition size; (2) a novel replacement scheme (Fema Rep), in which accessed data is evicted first to improve performance; (3) a round-robin allocation scheme (Fema Rou) that achieves fairness with as little performance degradation as possible. Results show that Fema achieves an average 81.4% performance improvement over the LRU algorithm, 53.5% over the default Linux Kernel Prefetching (LKP) algorithm, and 19.0% over the recently proposed practical AMP (adaptive multi-stream prefetching) algorithm. Fema achieves an average 74.2% fairness improvement (measured in fair speedup) over the LKP algorithm and 56.5% over the AMP algorithm.
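The round-robin allocation idea behind Fema Rou can be illustrated with a minimal sketch. This is not the authors' code; the function name `allocate_round_robin` and the block counts are hypothetical, and the sketch only shows the general principle of cycling through streams so each receives an equal share of the cache regardless of its access rate.

```python
# Illustrative sketch (not Fema's actual implementation): round-robin
# allocation of cache blocks among concurrent streams, so a fast stream
# cannot starve a slow one -- the fairness idea described for Fema Rou.

def allocate_round_robin(stream_ids, total_blocks):
    """Hand out cache blocks one at a time, cycling through the streams."""
    allocation = {sid: 0 for sid in stream_ids}
    for i in range(total_blocks):
        allocation[stream_ids[i % len(stream_ids)]] += 1
    return allocation

# Three streams of different speeds sharing a 64-block cache.
print(allocate_round_robin(["fast", "medium", "slow"], 64))
# → {'fast': 22, 'medium': 21, 'slow': 21}
```

With equal per-stream shares, the slow stream keeps a partition large enough for its prefetched data to survive until it is accessed, which is the fairness/efficiency trade-off the abstract describes.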
INDEX TERMS
Prefetching, Partitioning algorithms, Degradation, Resource management, Algorithm design and analysis, Performance evaluation, Equations
CITATION

Y. Li, D. Feng, L. Zeng and Z. Shi, "Fema: A Fairness and Efficiency Caching Management Algorithm in Shared Cache," 2014 9th IEEE International Conference on Networking, Architecture, and Storage (NAS), China, 2014, pp. 43-52.
doi:10.1109/NAS.2014.14