2007 International Conference on Parallel Processing (ICPP 2007)
Xi'an, China
Sept. 10, 2007 to Sept. 14, 2007
ISSN: 0190-3918
ISBN: 0-7695-2933-X
pp: 46

High Performance MPI over iWARP: Early Experiences
S. Narravula , The Ohio State University, USA
A. Mamidala , The Ohio State University, USA
D.K. Panda , The Ohio State University, USA
G. Santhanaraman , The Ohio State University, USA
A. Vishnu , The Ohio State University, USA
Modern interconnects and the corresponding high-performance MPI implementations have fueled the surge in popularity of compute clusters and cluster-based computing applications. Recently, with the introduction of the iWARP (Internet Wide Area RDMA Protocol) standard, RDMA and zero-copy data transfer capabilities have been standardized for Ethernet networks. While traditional Ethernet networks were largely limited to kernel-based TCP/IP stacks, and hence inherited their limitations, the iWARP capabilities of newer GigE and 10 GigE adapters have broken this barrier, exposing the performance potential of these networks.

To enable applications to harness the performance benefits of iWARP, and to quantify the extent of these improvements, we present MPI-iWARP, a high-performance MPI implementation over the OpenFabrics verbs interface. Our preliminary results with Chelsio T3B adapters show improvements of up to 37% in bandwidth, 75% in latency, and 80% in MPI allreduce performance as compared to MPICH2 over TCP/IP. To the best of our knowledge, this is the first design, implementation, and evaluation of a high-performance MPI over the iWARP standard.
S. Narravula, A. Mamidala, D.K. Panda, G. Santhanaraman, A. Vishnu, "High Performance MPI over iWARP: Early Experiences", 2007 International Conference on Parallel Processing (ICPP 2007), pp. 46, 2007, doi:10.1109/ICPP.2007.46