Issue No. 05, May 1999 (vol. 10)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/71.770197
<p><b>Abstract</b>—Presently, <it>massively parallel processors</it> (MPPs) are available in only a few commercial models. A sequence of three ASCI Teraflops MPPs has appeared before the new millennium. This paper evaluates six MPP systems through STAP benchmark experiments. STAP is a radar signal processing benchmark that exploits regularly structured SPMD data parallelism. We reveal the resource scaling effects on MPP performance along the orthogonal dimensions of <it>machine size, processor speed, memory capacity, messaging latency,</it> and <it>network bandwidth.</it> We show how to achieve balanced resource scaling against an enlarged workload (problem size). Among the three commercial MPPs, the IBM SP2 shows the highest speed and efficiency, attributed to its well-designed network with middleware support for a single system image. The Cray T3D demonstrates high network bandwidth with a good NUMA memory hierarchy. The Intel Paragon trails far behind due to its slow processors and the excessive latency it experiences in passing messages. Our analysis projects the lowest STAP speed on the ASCI Red, compared with the projected speeds of the two ASCI Blue machines. This is attributed to the slow processors used in the ASCI Red and the mismatch between its hardware and software. The Blue Pacific shows the highest potential to deliver scalable performance up to thousands of nodes. The Blue Mountain is designed to have the highest network bandwidth. Our results suggest a limit on the scalability of the <it>distributed shared-memory</it> (DSM) architecture adopted in the Blue Mountain. The scaling model offers a quantitative method for matching resource scaling with problem scaling to yield truly scalable performance. The model helps MPP designers optimize the processors, memory, network, and I/O subsystems of an MPP.
For MPP users, the scaling results can be applied to partition a large workload for SPMD execution or to minimize the software overhead in collective communication and remote memory update operations. Finally, we assess the applicability of our scaling model to evaluating MPPs with benchmarks other than STAP.</p>
Massively parallel processors, SPMD parallelism, ASCI program, STAP benchmark, phase-parallel model, latency and bandwidth, scalability analysis, supercomputer performance.
Kai Hwang, Choming Wang, Cho-Li Wang, Zhiwei Xu, "Resource Scaling Effects on MPP Performance: The STAP Benchmark Implications", IEEE Transactions on Parallel & Distributed Systems, vol. 10, no. 5, pp. 509-527, May 1999, doi:10.1109/71.770197