17th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP 2009)
Weimar, Germany
Feb. 18, 2009 to Feb. 20, 2009
ISSN: 1066-6192
ISBN: 978-0-7695-3544-9
pp: 427-436
ABSTRACT
Today most systems in high-performance computing (HPC) feature a hierarchical hardware design: shared-memory nodes with several multi-core CPUs are connected via a network infrastructure. Parallel programming must therefore combine distributed-memory parallelization across the node interconnect with shared-memory parallelization inside each node. We describe the potentials and challenges of the dominant programming models on hierarchically structured hardware: pure MPI (Message Passing Interface), pure OpenMP (with distributed shared memory extensions), and hybrid MPI+OpenMP in several flavors. We pinpoint cases where a hybrid programming model can indeed be the superior solution because of reduced communication needs and memory consumption, or improved load balance. Furthermore, we show that machine topology has a significant impact on performance for all parallelization strategies and that topology awareness should be built into all applications in the future. Finally, we give an outlook on possible standardization goals and extensions that could make hybrid programming easier to apply with performance in mind.
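As a concrete illustration of the hybrid model discussed in the abstract, the sketch below shows a minimal "masteronly"-style MPI+OpenMP program in C: OpenMP threads parallelize work inside each shared-memory node, while MPI communication is issued only by the main thread outside parallel regions, so MPI_THREAD_FUNNELED support is sufficient. The array size, the reduction kernel, and all variable names are illustrative assumptions and are not taken from the paper.

/* Minimal sketch (illustrative, not from the paper) of the "masteronly"
 * hybrid MPI+OpenMP flavor: MPI is called only outside OpenMP parallel
 * regions, so requesting MPI_THREAD_FUNNELED is sufficient. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000   /* assumed problem size, for illustration only */

int main(int argc, char **argv)
{
    int provided, rank, size;
    static double a[N];
    double local_sum = 0.0, global_sum = 0.0;

    /* Request funneled threading: only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Shared-memory parallelization inside the node via OpenMP. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < N; i++) {
        a[i] = (double)(rank + i);
        local_sum += a[i];
    }

    /* Distributed-memory communication between nodes via MPI,
     * issued by the main thread only, outside the parallel region. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f (ranks=%d, threads/rank=%d)\n",
               global_sum, size, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}

In practice such a program is launched with one MPI rank per node (or per socket) and the OpenMP thread count set to the remaining cores, e.g. via OMP_NUM_THREADS; how ranks and threads are pinned to the machine topology is exactly the kind of decision the paper argues should be made topology-aware.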
INDEX TERMS
application program interfaces, distributed memory systems, message passing, parallel programming, resource allocation, shared memory systems
CITATION

R. Rabenseifner, G. Hager and G. Jost, "Hybrid MPI/OpenMP Parallel Programming on Clusters of Multi-Core SMP Nodes," 17th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP 2009), Weimar, Germany, 2009, pp. 427-436.
doi:10.1109/PDP.2009.43