Issue No. 04 - July/August (2009 vol. 15)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TVCG.2008.188
Christoph Müller , Visualisierungsinstitut der Universität Stuttgart
Steffen Frey , Visualisierungsinstitut der Universität Stuttgart
Magnus Strengert , Visualisierungsinstitut der Universität Stuttgart
Carsten Dachsbacher , Visualisierungsinstitut der Universität Stuttgart
Thomas Ertl , Visualisierungsinstitut der Universität Stuttgart
We present a development environment for distributed GPU computing targeted at multi-GPU systems as well as graphics clusters. Our system is based on CUDA and logically extends its parallel programming model for graphics processors to higher levels of parallelism, namely, the PCI bus and network interconnects. While the extended API mimics the full function set of current graphics hardware—including the concept of global memory—on all distribution layers, the underlying communication mechanisms are handled transparently for the application developer. To allow for high scalability, in particular in network-interconnected environments, we introduce an automatic GPU-accelerated scheduling mechanism that is aware of data locality. This way, the overall amount of transmitted data can be greatly reduced, which leads to better GPU utilization and faster execution. We evaluate the performance and scalability of our system for bus-level and especially network-level parallelism on typical multi-GPU systems and graphics clusters.
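The core idea of the locality-aware scheduler described above—assigning work to the processing node that already holds most of the task's input data, so that less data crosses the bus or network—can be illustrated with a minimal sketch. All names and the greedy heuristic below are assumptions for illustration, not the paper's actual scheduling algorithm.

```python
# Illustrative sketch of locality-aware scheduling (assumed heuristic, not
# the paper's actual implementation): each task is placed on the node that
# already holds the largest share of its input blocks, and only the missing
# blocks are counted as transferred.

def schedule_locality_aware(tasks, node_data):
    """Greedily assign tasks to nodes to minimize data movement.

    tasks:     list of sets, each the data-block ids a task reads
    node_data: dict mapping node id -> set of block ids resident there
    returns:   (list of chosen node ids, total number of blocks moved)
    """
    assignment, transferred = [], 0
    for blocks in tasks:
        # Choose the node with maximal overlap with this task's inputs.
        best = max(node_data, key=lambda n: len(blocks & node_data[n]))
        missing = blocks - node_data[best]
        transferred += len(missing)
        node_data[best] |= missing  # fetched blocks now reside on that node
        assignment.append(best)
    return assignment, transferred

if __name__ == "__main__":
    nodes = {0: {"a", "b"}, 1: {"c", "d"}}
    tasks = [{"a", "b"}, {"c", "d"}, {"a", "c"}]
    assign, moved = schedule_locality_aware(tasks, nodes)
    print(assign, moved)  # first two tasks run in place; one block moves
```

A naive round-robin placement of the same three tasks would transfer more blocks; the overlap-based choice is what the abstract's claim of reduced transmitted data and better GPU utilization rests on.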
GPU computing, graphics clusters, parallel programming.
C. Müller, S. Frey, M. Strengert, C. Dachsbacher and T. Ertl, "A Compute Unified System Architecture for Graphics Clusters Incorporating Data Locality," in IEEE Transactions on Visualization & Computer Graphics, vol. 15, no. 4, pp. 605-617, 2009.