Generalized Communicators in the Message Passing Interface
June 2001 (vol. 12 no. 6)
pp. 610-616

Abstract—We propose extensions to the Message Passing Interface (MPI) that generalize the MPI communicator concept to allow multiple communication endpoints per process, dynamic creation of endpoints, and the transfer of endpoints between processes. The generalized communicator construct can be used to express a wide range of interesting communication structures, including collective communication operations involving multiple threads per process, communications between dynamically created threads or processes, and object-oriented applications in which communications are directed to specific objects. Furthermore, this enriched functionality can be provided in a manner that preserves backward compatibility with MPI. We describe the proposed extensions, illustrate their use with examples, and describe a prototype implementation in the popular MPI implementation MPICH.

[1] W. Gropp, E. Lusk, and A. Skjellum, Using MPI: Portable Parallel Programming with the Message Passing Interface. MIT Press, 1994.
[2] M. Snir, S. Otto, S. Huss-Lederman, D. Walker, and J. Dongarra, MPI: The Complete Reference. MIT Press, 1995.
[3] J.M. Squyres, B.C. McCandless, and A. Lumsdaine, “Object Oriented MPI: A Class Library for the Message Passing Interface,” Proc. Parallel Object-Oriented Methods and Applications Conf., 1996.
[4] “MPI-2: Extensions to the Message-Passing Interface,” Message Passing Interface Forum, 1997.
[5] J.H. Reppy, “Concurrent ML: Design, Application, and Semantics,” Functional Programming, Concurrency, Simulation, and Automated Reasoning, 1993.
[6] I. Foster and K.M. Chandy, “Fortran M: A Language for Modular Parallel Programming,” J. Parallel and Distributed Computing, 1994.
[7] I. Foster and S. Taylor, “A Compiler Approach to Scalable Concurrent Program Design,” ACM Trans. Programming Languages and Systems, vol. 16, no. 3, pp. 577-604, 1994.
[8] I. Foster and S. Taylor, Strand: New Concepts in Parallel Programming. Prentice Hall, 1990.
[9] A. Skjellum, N. Doss, K. Viswanathan, A. Chowdappa, and P. Bangalore, “Extending the Message Passing Interface,” Proc. 1994 Scalable Parallel Libraries Conf., 1994.
[10] “Document for the Real-Time Message Passing Interface (MPI/RT-1.0),” Real-Time Message Passing Interface Forum, 2000.
[11] I. Foster, C. Kesselman, and M. Snir, “Generalized Communicators in the Message Passing Interface,” Proc. 1996 MPI Developers Conf., pp. 42-49, 1996.
[12] M. Haines, P. Mehrotra, and D. Cronk, “Ropes: Support for Collective Operations among Distributed Threads,” Technical Report 95-36, Inst. for Computer Application in Science and Eng., 1995.
[13] W. Gropp, E. Lusk, N. Doss, and A. Skjellum, “A High-Performance, Portable Implementation of the MPI Message Passing Interface Standard,” Parallel Computing, vol. 22, no. 6, pp. 789-828, 1996.
[14] I. Foster, C. Kesselman, and S. Tuecke, “The Nexus Approach to Integrating Multithreading and Communication,” to be published in J. Parallel and Distributed Computing.
[15] I. Foster, J. Geisler, W. Gropp, N. Karonis, E. Lusk, G. Thiruvathukal, and S. Tuecke, “A Wide-Area Implementation of the Message Passing Interface,” Parallel Computing, vol. 24, no. 11, 1998.

Index Terms:
MPI, process spawning, multithreading, process names.
Erik D. Demaine, Ian Foster, Carl Kesselman, Marc Snir, "Generalized Communicators in the Message Passing Interface," IEEE Transactions on Parallel and Distributed Systems, vol. 12, no. 6, pp. 610-616, June 2001, doi:10.1109/71.932714