<p><b>Abstract</b>—In this paper, the impact of memory management policies and switch design alternatives on the application performance of cache-coherent nonuniform memory access (CC-NUMA) multiprocessors is studied in detail. Memory management plays an important role in determining the performance of NUMA multiprocessors by dictating the placement of data among the distributed memory modules. We analyze memory traces of several scientific applications under three memory management techniques, namely the buddy, round-robin, and first-touch policies, and compare their memory system performance. Interconnection network switch designs that incorporate virtual channels and a varying number of input buffers per switch are presented. Our performance evaluation is based on an execution-driven simulation methodology that captures the dynamic changes in network traffic during execution of the applications. It is shown that the use of cut-through switching with buffers and virtual channels can reduce the average message latency substantially. However, the choice of memory management policy affects both the amount of network traffic and the network access pattern. Thus, we vary the memory management policy and confirm the performance benefits of the improved switch designs. Results of sensitivity studies that vary the switch design parameters, cache block size, and memory page size are also presented. We find that a combination of the first-touch memory management policy and a switch design with virtual channels and increased buffer space can reduce the average message latency by as much as 70 percent.</p>
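The placement policies compared in the abstract can be made concrete with a small sketch. This is illustrative only, not the authors' simulator: the node count, function names, and data structures are assumptions. Under the buddy policy (omitted from the code), pages come from a central buddy allocator's free lists, so placement is effectively arbitrary with respect to the accessing processor; the two policies below are the ones whose placement rule depends directly on the page or the accessing node.

```python
# Illustrative sketch of two CC-NUMA page-placement policies.
# NUM_NODES and all names are assumptions for this example.

NUM_NODES = 16          # assumed number of nodes (processor + memory module)

_first_touch = {}       # page id -> home node, recorded on first reference


def round_robin_node(page_id):
    """Round-robin: page i is placed on node i mod N, regardless of
    which processor actually uses it."""
    return page_id % NUM_NODES


def first_touch_node(page_id, accessing_node):
    """First-touch: a page is placed on the node whose processor first
    references it, so later accesses by that processor stay local."""
    return _first_touch.setdefault(page_id, accessing_node)


# Example: node 3 is the first to touch pages 40-43, so first-touch keeps
# them all local to node 3, while round-robin scatters them across nodes.
local = [first_touch_node(p, 3) for p in range(40, 44)]
spread = [round_robin_node(p) for p in range(40, 44)]
```

First-touch tends to reduce remote-memory traffic for applications whose data is mostly accessed by one processor, which is consistent with the latency reductions the paper reports for that policy.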
Index Terms—Memory management, switch design, wormhole routing, execution-driven simulation, scientific applications, shared-memory multiprocessor.
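The benefit of virtual channels that the abstract highlights can be sketched with a toy queueing model. This is a simplified illustration under assumed names and structure, not the switch design evaluated in the paper: each input port holds one FIFO per virtual channel, and a flit blocked by a busy output stalls only its own channel rather than the whole port.

```python
from collections import deque


class SwitchPort:
    """Toy model of a switch input port with multiple virtual channels.

    A flit whose destination output is busy blocks only its own VC;
    heads of other VCs can still advance, avoiding head-of-line blocking.
    """

    def __init__(self, num_vcs):
        self.vcs = [deque() for _ in range(num_vcs)]

    def enqueue(self, vc, flit):
        self.vcs[vc].append(flit)

    def advance(self, output_busy):
        """Forward at most one flit whose head is not blocked; return it,
        or None if every VC head is blocked (or every VC is empty)."""
        for q in self.vcs:
            if q and not output_busy(q[0]):
                return q.popleft()
        return None


# With 2 VCs, a message stuck behind a busy output does not stall traffic
# bound for a free output; with 1 VC (a plain FIFO buffer), it does.
busy = lambda flit: flit[1] == "out0"   # assume output "out0" is busy

multi = SwitchPort(2)
multi.enqueue(0, ("msgA", "out0"))      # blocked head in VC0
multi.enqueue(1, ("msgB", "out1"))      # VC1 can still make progress

single = SwitchPort(1)
single.enqueue(0, ("msgA", "out0"))     # blocked head stalls the port
single.enqueue(0, ("msgC", "out1"))     # even though msgC's output is free
```

This head-of-line effect is one reason the paper finds that adding virtual channels and buffer space to a cut-through switch lowers average message latency under realistic application traffic.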

A. Kumar, H. Wang, L. N. Bhuyan, and R. Iyer, "Impact of CC-NUMA Memory Management Policies on the Application Performance of Multistage Switching Networks," IEEE Transactions on Parallel and Distributed Systems, vol. 11, pp. 230-246, 2000.