2018 IEEE International Symposium on Workload Characterization (IISWC)
Raleigh, NC, USA
Sept. 30, 2018 to Oct. 2, 2018
ISBN: 978-1-5386-6781-1
pp: 191-202
Ang Li , Pacific Northwest National Laboratory
Shuaiwen Leon Song , Pacific Northwest National Laboratory
Jieyang Chen , Pacific Northwest National Laboratory
Xu Liu , Pacific Northwest National Laboratory
Nathan Tallent , Pacific Northwest National Laboratory
Kevin Barker , Pacific Northwest National Laboratory
ABSTRACT
High-performance multi-GPU computing has become an inevitable trend due to the ever-increasing demand for computation capability in emerging domains such as deep learning, big data, and planet-scale applications. However, the lack of deep understanding of how modern GPUs can be connected, and of the actual impact of state-of-the-art interconnects on multi-GPU application performance, has become a hurdle. Additionally, the absence of a practical multi-GPU benchmark suite poses further obstacles to conducting research in the multi-GPU era. In this paper, we fill the gap by proposing a multi-GPU benchmark suite named Tartan, which contains microbenchmarks, scale-up applications, and scale-out applications. We then apply Tartan to evaluate the four latest types of modern GPU interconnects, i.e., PCI-e, NVLink-V1, NVLink-V2, and InfiniBand with GPUDirect-RDMA, on two recently released NVIDIA super AI platforms as well as ORNL's exascale prototype system. Based on empirical evaluation, we observe four new types of NUMA effects: three are triggered by NVLink's topology, connectivity, and routing, while one is caused by PCI-e (i.e., anti-locality). They are very important for performance tuning in a multi-GPU environment. Our evaluation results show that, unless the current CPU-GPU master-slave programming model can be replaced, it is difficult for scale-up multi-GPU applications to truly benefit from faster intra-node interconnects such as NVLink; for inter-node scale-out applications, although the interconnect is more crucial to overall performance, GPUDirect-RDMA is not always the optimal choice. The Tartan benchmark suite, including the microbenchmarks, is open-source and available at http://github.com/uuudown/Tartan.
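To illustrate the kind of interconnect microbenchmark the abstract refers to, below is a minimal CUDA sketch (not taken from the Tartan suite itself) that times a peer-to-peer copy between two GPUs; the device IDs (0 and 1), transfer size, and iteration count are illustrative assumptions, and error checking is omitted for brevity.

    // Minimal sketch: measure GPU0 -> GPU1 peer-to-peer copy bandwidth.
    // Assumes at least two GPUs; error checking omitted for brevity.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        const size_t bytes = 256ull << 20;  // 256 MiB per transfer (arbitrary)
        const int iters = 20;               // timed iterations (arbitrary)

        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, 0, 1);  // direct P2P path available?

        cudaSetDevice(0);
        if (canAccess) cudaDeviceEnablePeerAccess(1, 0);
        void *src; cudaMalloc(&src, bytes);

        cudaSetDevice(1);
        if (canAccess) cudaDeviceEnablePeerAccess(0, 0);
        void *dst; cudaMalloc(&dst, bytes);

        cudaSetDevice(0);
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaMemcpyPeer(dst, 1, src, 0, bytes);  // warm-up copy
        cudaEventRecord(start);
        for (int i = 0; i < iters; ++i)
            cudaMemcpyPeer(dst, 1, src, 0, bytes);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.f;
        cudaEventElapsedTime(&ms, start, stop);
        double gbps = (double)bytes * iters / (ms * 1e-3) / 1e9;
        printf("GPU0->GPU1 %s bandwidth: %.2f GB/s\n",
               canAccess ? "P2P" : "staged-through-host", gbps);
        return 0;
    }

Sweeping such a copy over every GPU pair on a node is one way to expose the topology-, connectivity-, and routing-induced NUMA effects the abstract describes, since pairs reached over NVLink, PCI-e, or multiple hops report visibly different bandwidths.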
INDEX TERMS
Graphics processing units, Bandwidth, Peer-to-peer computing, Benchmark testing, Network topology, Topology, Routing
CITATION
A. Li, S. L. Song, J. Chen, X. Liu, N. Tallent and K. Barker, "Tartan: Evaluating Modern GPU Interconnect via a Multi-GPU Benchmark Suite," 2018 IEEE International Symposium on Workload Characterization (IISWC), Raleigh, NC, USA, 2018, pp. 191-202.
doi:10.1109/IISWC.2018.8573483