Issue No. 1, Jan. 2014 (vol. 25)
Albert-Jan Nicholas Yzelman , Flanders ExaScience Lab (Intel Labs Europe), Leuven
Dirk Roose , KU Leuven, Heverlee
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TPDS.2013.31
The sparse matrix-vector multiplication is an important computational kernel, but it is hard to execute efficiently even in the sequential case. Its inherent problems, namely low arithmetic intensity, inefficient cache use, and limited memory bandwidth, are magnified as the core count on shared-memory parallel architectures increases. Existing techniques are discussed in detail and categorized chiefly by the type of data distribution they employ. Based on this categorization, new parallelization techniques are proposed. The theoretical scalability and memory usage of the various strategies are analyzed, and experiments on multiple NUMA architectures confirm the validity of the results. One of the newly proposed methods attains the best average result in experiments on a large set of matrices. In one of the experiments it obtains a parallel efficiency of 90 percent, while on average its parallel efficiency is close to 60 percent.
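To illustrate why this kernel has low arithmetic intensity and poor cache behavior, the following is a minimal sketch of a sequential sparse matrix-vector multiply (y = Ax) in the common Compressed Row Storage (CRS) format. This is a generic illustration of the kernel being parallelized, not the authors' implementation; the function and variable names are chosen for exposition only.

```python
def spmv_crs(row_start, col_idx, vals, x):
    """Multiply a CRS-stored sparse matrix by a dense vector x.

    row_start[i]..row_start[i+1] delimits the nonzeros of row i;
    col_idx holds their column indices and vals their values.
    """
    n = len(row_start) - 1
    y = [0.0] * n
    for i in range(n):                       # one streaming pass over the rows
        acc = 0.0
        for k in range(row_start[i], row_start[i + 1]):
            # Each nonzero costs one multiply-add but triggers an
            # indirect, irregular access into x: this is the source of
            # the low arithmetic intensity and poor cache reuse.
            acc += vals[k] * x[col_idx[k]]
        y[i] = acc
    return y

# 3x3 example matrix:
#   [[2, 0, 1],
#    [0, 3, 0],
#    [4, 0, 5]]
row_start = [0, 2, 3, 5]
col_idx   = [0, 2, 1, 0, 2]
vals      = [2.0, 1.0, 3.0, 4.0, 5.0]
x = [1.0, 2.0, 3.0]
y = spmv_crs(row_start, col_idx, vals, x)   # -> [5.0, 6.0, 19.0]
```

Shared-memory parallelization strategies such as those studied in the paper differ chiefly in how the rows, nonzeros, and the vectors x and y are distributed over threads and NUMA domains.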
Sparse matrices, Vectors, Kernel, Bandwidth, Indexes, Computer architecture, Particle separators, NUMA architectures, Sparse matrix-vector multiplication, shared-memory parallelism, cache-oblivious, sparse matrix partitioning, matrix reordering, Hilbert space-filling curve, high-performance computing
Albert-Jan Nicholas Yzelman, Dirk Roose, "High-Level Strategies for Parallel Shared-Memory Sparse Matrix-Vector Multiplication", IEEE Transactions on Parallel & Distributed Systems, vol. 25, no. 1, pp. 116-125, Jan. 2014, doi:10.1109/TPDS.2013.31