Issue No. 05 - May (1993, vol. 4)
pp. 520-534
ABSTRACT
Multis, shared-memory multiprocessors implemented with a single bus and a snooping cache protocol, are inherently limited to a small number of processors, and, as systems grow beyond a single bus, the bandwidth requirements of broadcast operations limit scalability. Hardware support that provides cache coherence without broadcast can become very expensive. An approach to maintaining coherence using approximate information held in special-purpose caches, called pruning-caches, that provides robust performance over a wide range of workloads is presented. The pruning-cache approach is compared to the more conventional inclusion cache for providing multilevel inclusion (MLI) in the cache hierarchy, and it is shown that pruning-caches are more cost-effective and more robust. Using both analysis and simulation, it is also shown that the k-ary n-cube topology provides scalable, bottleneck-free communication for uniform, point-to-point traffic.
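To illustrate the idea of coherence from approximate information, the following is a minimal sketch, not the paper's exact protocol: a pruning-cache at an interconnect switch filters coherence broadcasts heading into its subtree. Its contents may conservatively over-report where copies exist (so some traffic is forwarded needlessly), but must never claim a subtree is clean when it actually holds a copy. The class name PruningCache and methods note_fill, note_subtree_clean, and must_forward are illustrative assumptions, not identifiers from the paper.

```python
class PruningCache:
    """Approximate filter for coherence broadcasts entering one subtree (illustrative sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # block address -> True if the subtree may hold a copy

    def note_fill(self, block):
        """Record that some cache in this subtree fetched the block."""
        if block not in self.entries and len(self.entries) >= self.capacity:
            # Evict an arbitrary entry. Eviction only discards information:
            # later broadcasts for the evicted block are forwarded
            # (conservative), never wrongly pruned.
            self.entries.pop(next(iter(self.entries)))
        self.entries[block] = True

    def note_subtree_clean(self, block):
        """Record that no cache below holds the block (e.g. after invalidation acks)."""
        self.entries[block] = False

    def must_forward(self, block):
        """Decide whether an invalidation/broadcast must enter this subtree."""
        # Unknown blocks are forwarded: approximate information errs toward broadcasting.
        return self.entries.get(block, True)


# Toy usage: the switch drops traffic only for blocks known to be absent below,
# approximating the traffic reduction of a full directory at lower cost.
pc = PruningCache(capacity=2)
pc.note_fill(0x40)
pc.note_subtree_clean(0x80)
print(pc.must_forward(0x40))   # True  -> forward, a copy may exist below
print(pc.must_forward(0x80))   # False -> prune, known clean
print(pc.must_forward(0xC0))   # True  -> unknown, conservatively forward
```

The key design point mirrored here is that stale or missing entries only cost extra bandwidth, never correctness, which is what allows the structure to stay small and cheap relative to a precise directory.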
INDEX TERMS
pruning-cache directories; large-scale multiprocessors; shared-memory multiprocessors; multilevel inclusion; n-cube topology; bottleneck-free communication; buffer storage; memory architecture; multiprocessor interconnection networks; shared memory systems; storage management
CITATION
S.L. Scott and J.R. Goodman, "Performance of Pruning-Cache Directories for Large-Scale Multiprocessors," IEEE Transactions on Parallel & Distributed Systems, vol. 4, no. 5, pp. 520-534, May 1993, doi:10.1109/71.224215