Cluster Computing and the Grid, IEEE International Symposium on (2011)
Newport Beach, California USA
May 23, 2011 to May 26, 2011
ISBN: 978-0-7695-4395-6
pp: 73-83
ABSTRACT
Collective communication operations, used by many scientific applications, tend to limit overall parallel application performance and scalability. Computer systems are becoming more heterogeneous, with increasing node and core-per-node counts. In addition, a growing number of data-access mechanisms, with varying characteristics, are supported within a single computer system. We describe a new hierarchical collective communication framework that takes advantage of hardware-specific data-access mechanisms. It is flexible, with run-time hierarchy specification and sharing of collective communication primitives between collective algorithms. Data buffers are shared between levels in the hierarchy, reducing collective communication management overhead. We have implemented several versions of the Message Passing Interface (MPI) collective operations MPI_Barrier() and MPI_Bcast(), and run experiments using up to 49,152 processes on a Cray XT5 and a small InfiniBand-based cluster. At 49,152 processes our barrier implementation outperforms the optimized native implementation by 75%; 32-byte and one-megabyte broadcasts outperform it by 62% and 11%, respectively, with better scalability characteristics. Improvements relative to the default Open MPI implementation are much larger.
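For context, the sketch below is a minimal, illustrative MPI program (not taken from the paper) exercising the two collectives the abstract benchmarks, MPI_Barrier() and MPI_Bcast(). A hierarchical framework such as Cheetah is selected inside the MPI library; the application-facing calls shown here are unchanged standard MPI. The 32-byte payload size mirrors the small-message experiment and is otherwise arbitrary.

```c
/* Minimal sketch: standard MPI collectives used in the paper's experiments. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank;
    char buf[32] = {0};                 /* 32-byte payload, as in the small-message case */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        strcpy(buf, "hello from root");

    MPI_Barrier(MPI_COMM_WORLD);        /* synchronize all processes */
    MPI_Bcast(buf, sizeof(buf), MPI_CHAR, 0, MPI_COMM_WORLD); /* root broadcasts buffer */

    printf("rank %d received: %s\n", rank, buf);

    MPI_Finalize();
    return 0;
}
```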
INDEX TERMS
Framework, Collectives, Hierarchy
CITATION
Pavel Shamis, Manjunath Gorentla Venkata, Vasily Filipov, Gilad Shainer, Joshua Ladd, Ishai Rabinovitz, Richard Graham, "Cheetah: A Framework for Scalable Hierarchical Collective Operations," Cluster Computing and the Grid, IEEE International Symposium on, pp. 73-83, 2011, doi:10.1109/CCGrid.2011.42