Issue No. 4, April 2011 (vol. 22)
ISSN: 1045-9219
pp: 608-620
Naga K. Govindaraju , Microsoft Corp., Redmond
Bingsheng He , Nanyang Technological University, Singapore
Qiong Luo , Hong Kong University of Science and Technology, Hong Kong
Wenbin Fang , University of Wisconsin-Madison, Madison
ABSTRACT
We design and implement Mars, a MapReduce runtime system accelerated with graphics processing units (GPUs). MapReduce is a simple and flexible parallel programming paradigm originally proposed by Google to ease large-scale data processing on thousands of CPUs. Compared with CPUs, GPUs offer an order of magnitude higher computation power and memory bandwidth. However, GPUs are designed as special-purpose coprocessors, and their programming interfaces are less familiar to MapReduce programmers than those of CPUs. To harness the power of GPUs for MapReduce, we developed Mars to run on NVIDIA GPUs, AMD GPUs, and multicore CPUs. Furthermore, we integrated Mars into Hadoop, an open-source CPU-based MapReduce system. Mars hides the programming complexity of GPUs behind the simple and familiar MapReduce interface, and it automatically manages task partitioning, data distribution, and parallelization on the processors. We have implemented six representative applications on Mars and evaluated their performance on PCs equipped with GPUs as well as multicore CPUs. The experimental results show that the GPU-CPU coprocessing of Mars on an NVIDIA GTX280 GPU and an Intel quad-core CPU outperformed Phoenix, the state-of-the-art MapReduce system for multicore CPUs, with a speedup of up to 72 times and of 24 times on average, depending on the application. Additionally, integrating Mars into Hadoop enabled GPU acceleration for a network of PCs.
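To make the MapReduce-on-GPU idea in the abstract concrete, the sketch below shows a minimal map stage and reduce stage written in CUDA. It is a hypothetical illustration, not Mars's actual API: the names (map_kernel, reduce_kernel, NUM_KEYS) and the atomic-add reduction are assumptions made for brevity, whereas a full runtime such as Mars groups intermediate key-value pairs before reduction and manages buffer allocation for the user.

    // Minimal MapReduce-style sketch in CUDA (illustrative only, not Mars's API).
    // Map: each GPU thread emits one (key, value) pair from one input record.
    // Reduce: values sharing a key are summed; atomics stand in for grouping.
    #include <cstdio>
    #include <cuda_runtime.h>

    #define NUM_KEYS 16  // assumed small, fixed key space for this sketch

    __global__ void map_kernel(const int *input, int *keys, int *values, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            keys[i]   = input[i] % NUM_KEYS;  // emit key
            values[i] = 1;                    // emit value (a count)
        }
    }

    __global__ void reduce_kernel(const int *keys, const int *values,
                                  int *sums, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) atomicAdd(&sums[keys[i]], values[i]);  // sum per key
    }

    int main() {
        const int n = 1 << 20;
        int *input, *keys, *values, *sums;
        cudaMallocManaged(&input,  n * sizeof(int));
        cudaMallocManaged(&keys,   n * sizeof(int));
        cudaMallocManaged(&values, n * sizeof(int));
        cudaMallocManaged(&sums,   NUM_KEYS * sizeof(int));
        for (int i = 0; i < n; ++i) input[i] = i;
        cudaMemset(sums, 0, NUM_KEYS * sizeof(int));

        int threads = 256, blocks = (n + threads - 1) / threads;
        map_kernel<<<blocks, threads>>>(input, keys, values, n);
        reduce_kernel<<<blocks, threads>>>(keys, values, sums, n);
        cudaDeviceSynchronize();

        for (int k = 0; k < NUM_KEYS; ++k)
            printf("key %d -> %d\n", k, sums[k]);
        return 0;
    }

One GPU thread per record is the natural mapping for the massive data parallelism the abstract refers to; the user supplies only the map and reduce logic, while the runtime handles partitioning and scheduling.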
INDEX TERMS
MapReduce, graphics processor, parallel computing, multicore processor, many-core architecture.
CITATION
Naga K. Govindaraju, Bingsheng He, Qiong Luo, Wenbin Fang, "Mars: Accelerating MapReduce with Graphics Processors", IEEE Transactions on Parallel & Distributed Systems, vol. 22, no. 4, pp. 608-620, April 2011, doi:10.1109/TPDS.2010.158