Issue No. 1, Jan. 2014 (vol. 25)
ISSN: 1045-9219
pp. 116-125
Albert-Jan Nicholas Yzelman, Flanders ExaScience Lab (Intel Labs Europe), Leuven, Belgium
Dirk Roose, Department of Computer Science, KU Leuven, Heverlee, Belgium
Sparse matrix-vector multiplication is an important computational kernel, yet it is hard to execute efficiently even in the sequential case. Its problems (low arithmetic intensity, inefficient cache use, and limited memory bandwidth) are magnified as the core count of shared-memory parallel architectures increases. Existing techniques are discussed in detail and categorized chiefly by their distribution types, and on this basis new parallelization techniques are proposed. The theoretical scalability and memory usage of the various strategies are analyzed, and experiments on multiple NUMA architectures confirm the validity of the results. One of the newly proposed methods attains the best average result in experiments on a large set of matrices: in one experiment it reaches a parallel efficiency of 90 percent, and on average it attains a parallel efficiency close to 60 percent.
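To make the kernel and its parallelization concrete, the sketch below shows a sequential sparse matrix-vector multiply in the common Compressed Row Storage (CRS) format, followed by a row-wise (1D) shared-memory parallelization using OpenMP. This is a generic, minimal illustration of the kernel the paper studies, not the authors' implementation; the struct layout and all names are assumptions made for this example.

#include <stddef.h>

/* Sparse matrix in Compressed Row Storage (CRS/CSR) format.
 * (Illustrative layout; not the paper's data structure.)          */
typedef struct {
    size_t        m;         /* number of rows                      */
    const size_t *row_start; /* m+1 entries; nonzeros of row i lie
                                in [row_start[i], row_start[i+1])   */
    const size_t *col;       /* column index of each nonzero        */
    const double *val;       /* value of each nonzero               */
} crs_matrix;

/* y = A * x, sequential. Each nonzero is used exactly once: one
 * multiply-add per value loaded from memory, which is the low
 * arithmetic intensity the abstract refers to. The indirect reads
 * x[A->col[k]] are where cache efficiency is won or lost.          */
void spmv(const crs_matrix *A, const double *x, double *y)
{
    for (size_t i = 0; i < A->m; ++i) {
        double sum = 0.0;
        for (size_t k = A->row_start[i]; k < A->row_start[i + 1]; ++k)
            sum += A->val[k] * x[A->col[k]];
        y[i] = sum;
    }
}

/* Row-wise (1D) parallelization: each thread owns a block of rows,
 * so writes to y never conflict, while x is read by all threads.
 * On NUMA machines, the placement of A, x, and y relative to the
 * threads then dominates performance.                              */
void spmv_parallel(const crs_matrix *A, const double *x, double *y)
{
    #pragma omp parallel for schedule(static)
    for (size_t i = 0; i < A->m; ++i) {
        double sum = 0.0;
        for (size_t k = A->row_start[i]; k < A->row_start[i + 1]; ++k)
            sum += A->val[k] * x[A->col[k]];
        y[i] = sum;
    }
}

The row-wise distribution shown here is only the simplest of the distribution types the paper categorizes; two-dimensional distributions and cache-oblivious orderings such as the Hilbert space-filling curve (see the keywords below) change how nonzeros, and hence the accesses to x and y, are assigned to threads. Parallel efficiency, as quoted in the abstract, is the speedup over the sequential kernel divided by the number of cores, so 90 percent corresponds to near-linear scaling.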
Keywords: sparse matrices, vectors, kernel, bandwidth, indexes, computer architecture, NUMA architectures, sparse matrix-vector multiplication, shared-memory parallelism, cache-oblivious, sparse matrix partitioning, matrix reordering, Hilbert space-filling curve, high-performance computing
Albert-Jan Nicholas Yzelman, Dirk Roose, "High-Level Strategies for Parallel Shared-Memory Sparse Matrix-Vector Multiplication", IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 1, pp. 116-125, Jan. 2014, doi:10.1109/TPDS.2013.31