<p><b>Abstract</b>—We consider a number of cache/memory hierarchy design issues in systems with compressed random access memories (C-RAMs), in which compression and decompression occur automatically to and from main memory. Using a C-RAM as main memory, the bulk of main memory contents is stored in a compressed format and dynamically decompressed to handle cache misses at the next higher level of memory. This is the general approach adopted in IBM's Memory Expansion Technology (MXT). The design of the main memory directory structures and storage allocation methods in such systems is described elsewhere; here, we focus on issues related to cache-memory interfaces. In particular, if the cache line size (of the cache or caches to which main memory data is transferred) differs from the size of the unit of compression in main memory, bandwidth and latency problems can occur. Another issue is that of guaranteed forward progress, that is, ensuring that modified lines can be written to the compressed main memory so that the system can continue operation even if overall compression deteriorates. We study several approaches for solving these problems, using trace-driven analysis to evaluate alternatives.</p>
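The bandwidth problem the abstract alludes to can be made concrete with a small back-of-the-envelope sketch. The sizes below (a 1 KB compression unit, a 64 B cache line) are illustrative assumptions, not values taken from the paper; the point is only that a miss on a single cache line forces decompression of the entire unit containing it, amplifying the data moved per miss.

```python
# Illustrative sketch (not the paper's design): models the bandwidth
# amplification that arises when the cache line size differs from the
# unit of compression in a C-RAM-style compressed main memory.

COMPRESSION_UNIT = 1024  # bytes per unit of compression (assumed value)
CACHE_LINE = 64          # bytes per cache line (assumed value)

def bytes_decompressed_per_miss(unit=COMPRESSION_UNIT):
    """A miss on one cache line forces decompression of the whole
    compression unit containing that line, so `unit` bytes are
    processed to deliver far fewer useful bytes."""
    return unit

def amplification(unit=COMPRESSION_UNIT, line=CACHE_LINE):
    """Ratio of bytes decompressed to bytes actually requested."""
    return unit / line

print(bytes_decompressed_per_miss())  # 1024
print(amplification())                # 16.0
```

With these assumed sizes, each 64-byte miss costs a 1024-byte decompression, a 16x amplification, which is why matching (or buffering between) the cache line size and the compression unit size matters for both bandwidth and latency.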
Index Terms—Memory system design, cache design, memory compression, performance analysis, trace-driven simulation.

P. A. Franaszek, J. T. Robinson and C. D. Benveniste, "Cache-Memory Interfaces in Compressed Memory Systems," in IEEE Transactions on Computers, vol. 50, no. , pp. 1106-1116, 2001.