2012 21st International Conference on Parallel Architectures and Compilation Techniques (PACT) (2012)
Minneapolis, MN, USA
Sept. 19, 2012 to Sept. 23, 2012
JongHyuk Lee, University of Houston, TX 77004, USA
Ziyi Liu, University of Houston, TX 77004, USA
Xiaonan Tian, University of Houston, TX 77004, USA
Dong Hyuk Woo, Intel Labs, Santa Clara, CA 95054, USA
Weidong Shi, University of Houston, TX 77004, USA
Dainis Boumber, University of Houston, TX 77004, USA
In this paper, we present a novel approach that uses the integrated GPU to accelerate bulk memory operations, such as memcpy and memset, that are normally performed by the CPU. Offloading bulk memory operations to the GPU has several advantages: i) the throughput-driven GPU outperforms the CPU on bulk memory operations; ii) for an on-die GPU with a cache unified between the GPU and the CPU, the GPU's private caches can be leveraged by the CPU for storing moved data, reducing pressure on the CPU caches; iii) with additional lightweight hardware, asynchronous offload can be supported as well; and iv) unlike prior art that uses dedicated hardware copy engines (e.g., DMA), our approach leverages the existing GPU hardware resources as much as possible. Performance results based on our solution show that the offloaded bulk memory operations outperform the CPU by up to 4.3 times on microbenchmarks while using fewer resources. For eight real-world applications evaluated in a cycle-based full-system simulation environment, the results show a 30% speedup for five of the applications and more than 20% for two.
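To make the offload idea concrete, the sketch below shows how a bulk memcpy can be expressed as a throughput-oriented GPU kernel. This is not the paper's implementation (the paper targets an integrated GPU with hardware support and a unified cache); it is a hypothetical CUDA illustration, and all names (`gpu_memcpy_kernel`, `offload_memcpy`) are illustrative.

```cuda
#include <cstdint>
#include <cuda_runtime.h>

// Grid-stride copy kernel: each thread copies bytes at a stride equal to
// the total thread count, so adjacent threads touch adjacent addresses
// (coalesced accesses), which suits a throughput-driven GPU.
__global__ void gpu_memcpy_kernel(uint8_t *dst, const uint8_t *src, size_t n) {
    size_t stride = (size_t)gridDim.x * blockDim.x;
    for (size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride)
        dst[i] = src[i];
}

// Host-side wrapper: launch the copy on a stream without synchronizing,
// so the CPU can continue with independent work, mirroring the
// asynchronous offload the abstract describes.
void offload_memcpy(uint8_t *dst, const uint8_t *src, size_t n, cudaStream_t s) {
    gpu_memcpy_kernel<<<128, 256, 0, s>>>(dst, src, n);
    // The caller synchronizes on stream `s` only when the copied data
    // is actually needed.
}
```

A production version would copy wider words (e.g., `uint4`) rather than single bytes; the byte-granular loop is kept here for clarity.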
Graphics processing units, Central Processing Unit, Multicore processing, Throughput, Out of order
J. Lee, Z. Liu, X. Tian, D. H. Woo, W. Shi and D. Boumber, "Acceleration of bulk memory operations in a heterogeneous multicore architecture," 2012 21st International Conference on Parallel Architectures and Compilation Techniques (PACT), Minneapolis, MN, USA, 2012, pp. 423-424.