2014 IEEE International Parallel & Distributed Processing Symposium Workshops (IPDPSW)
Phoenix, AZ, USA
May 19, 2014 to May 23, 2014
ISBN: 978-1-4799-4117-9
pp: 976-983
ABSTRACT
Due to their massive parallelism and high performance per watt, GPUs have gained great popularity in high-performance computing and are strong candidates for future exascale systems. However, communication and data transfer in GPU-accelerated systems remain a challenging problem. Since the GPU normally cannot control a network device, a hybrid programming model is preferred today, in which the GPU is used for computation and the CPU handles the communication. As a result, communication between distributed GPUs suffers from unnecessary overhead introduced by switching the control flow from the GPU to the CPU and vice versa. In this work, we modify the user-space libraries and device drivers of GPUs and of an InfiniBand network device so that the GPU can control the InfiniBand device and independently source and sink communication requests without any involvement of the CPU. Our performance analysis details the differences to hybrid communication models; in particular, it shows that the CPU's advantage in generating work requests outweighs the overhead associated with context switching. In other words, our results show that complex networking protocols like IB Verbs are better handled by CPUs in spite of the time penalties due to context switching, since the overhead of work request generation cannot be parallelized and does not fit the highly parallel programming model of GPUs.
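To make the contrast concrete, the following is a minimal sketch of the hybrid model the abstract argues against replacing wholesale: the CPU waits for the GPU kernel, generates the IB Verbs work request, and polls for the completion. It is an illustrative assumption, not code from the paper; the queue pair, completion queue, memory registration, and remote address/rkey are assumed to have been set up during connection establishment.

/* Hybrid model sketch: CPU sources and sinks the communication request. */
#include <infiniband/verbs.h>
#include <cuda_runtime.h>
#include <stdint.h>

void send_gpu_result(struct ibv_qp *qp, struct ibv_cq *cq,
                     void *buf, size_t len, uint32_t lkey,
                     uint64_t remote_addr, uint32_t rkey)
{
    /* 1. Control flow switches to the CPU: wait for the GPU result. */
    cudaDeviceSynchronize();

    /* 2. CPU generates the work request -- the inherently serial step
     *    that the paper finds unsuited to the GPU's programming model. */
    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = (uint32_t)len,
        .lkey   = lkey,
    };
    struct ibv_send_wr wr = {
        .wr_id      = 1,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_WRITE,
        .send_flags = IBV_SEND_SIGNALED,
    };
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    struct ibv_send_wr *bad_wr = NULL;
    ibv_post_send(qp, &wr, &bad_wr);      /* hand the request to the NIC */

    /* 3. CPU sinks the completion before returning control to the GPU. */
    struct ibv_wc wc;
    while (ibv_poll_cq(cq, 1, &wc) == 0)
        ;                                 /* busy-poll the completion queue */
}

The GPU-controlled variant studied in the paper moves steps 2 and 3 into device code, which removes the context switches but serializes work request generation inside the GPU's wide-SIMD execution model.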
INDEX TERMS
Graphics processing units, Data transfer, Registers, Performance evaluation, Context, Libraries, Instruction sets
CITATION

L. Oden, H. Fröning and F.-J. Pfreundt, "Infiniband-Verbs on GPU: A Case Study of Controlling an Infiniband Network Device from the GPU," 2014 IEEE International Parallel & Distributed Processing Symposium Workshops (IPDPSW), Phoenix, AZ, USA, 2014, pp. 976-983.
doi:10.1109/IPDPSW.2014.111