2008 37th International Conference on Parallel Processing (2008)
Sept. 9, 2008 to Sept. 11, 2008
ISSN: 0190-3918
ISBN: 978-0-7695-3374-2
pp: 478-486
The mismatch between processor and memory speed continues to make memory-hierarchy design important. While larger cache blocks can exploit more spatial locality, they increase the demand on off-chip memory bandwidth, a scarce resource in future microprocessor designs. We show that it is possible to use larger block sizes without increasing off-chip memory bandwidth by applying compression techniques to cache/memory block transfers. Since bandwidth is reduced by up to a factor of three, we propose to use larger blocks. While compression/decompression ends up on the critical memory access path, we find that its negative impact on memory access latency is often dwarfed by the performance gains from larger block sizes. Our proposed scheme combines compression with a previous mechanism for dynamically choosing a larger cache block when the spatial locality makes it advantageous. This combined scheme consistently improves performance, by 19% on average.
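To illustrate the core idea, the toy sketch below compresses a cache block with a simple zero-run encoding before "transfer." This is not the compression scheme used in the paper (real link-compression hardware uses encoders in the memory controller); it only shows why transferring a larger block need not consume proportionally more link bandwidth when block contents are compressible. All names here are illustrative.

```python
def compress_block(block: bytes) -> bytes:
    """Toy encoder: a run of zero bytes becomes (0x00, run_length);
    any other byte is passed through prefixed by 0x01.
    Runs are capped at 255 so the length fits in one byte."""
    out = bytearray()
    i = 0
    while i < len(block):
        if block[i] == 0:
            run = 1
            while i + run < len(block) and block[i + run] == 0 and run < 255:
                run += 1
            out += bytes([0x00, run])   # encode the zero run
            i += run
        else:
            out += bytes([0x01, block[i]])  # literal byte
            i += 1
    return bytes(out)

# A 128-byte block that is mostly zero compresses to a few bytes,
# so a larger block need not cost proportionally more bandwidth.
block = bytes([7, 3]) + bytes(126)      # 2 data bytes followed by 126 zeros
compressed = compress_block(block)
print(len(block), len(compressed))      # prints: 128 6
```

The same principle underlies the paper's scheme: when blocks exhibit value redundancy, the compressed transfer size, not the architectural block size, determines the bandwidth consumed on the cache/memory link.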
Computer architecture, data link compression, performance

M. Thuresson and P. Stenstrom, "Accommodation of the Bandwidth of Large Cache Blocks Using Cache/Memory Link Compression," 2008 37th International Conference on Parallel Processing (ICPP), pp. 478-486, 2008.