Issue No. 05 - May 1997 (vol. 8)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/71.598277
<p><b>Abstract</b>—In this paper, we describe scalable parallel algorithms for symmetric sparse matrix factorization, analyze their performance and scalability, and present experimental results for up to 1,024 processors on a Cray T3D parallel computer. Through our analysis and experimental results, we demonstrate that our algorithms substantially improve the state of the art in the parallel direct solution of sparse linear systems, both in scalability and in overall performance. It is well known that dense matrix factorization scales well and can be implemented efficiently on parallel computers. In this paper, we present the first algorithms to factor a wide class of sparse matrices (including those arising from two- and three-dimensional finite element problems) that are asymptotically as scalable as dense matrix factorization algorithms on a variety of parallel architectures. Our algorithms incur less communication overhead and are more scalable than any previously known parallel formulation of sparse matrix factorization. Although this paper discusses Cholesky factorization of symmetric positive definite matrices, the algorithms can be adapted for solving sparse linear least squares problems and for Gaussian elimination of diagonally dominant matrices that are almost symmetric in structure. An implementation of one of our sparse Cholesky factorization algorithms delivers up to 20 GFlops on a Cray T3D for medium-size structural engineering and linear programming problems. To the best of our knowledge, this is the highest performance ever obtained for sparse Cholesky factorization on any supercomputer.</p>
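For readers unfamiliar with the kernel the abstract refers to, the following is a minimal sequential sketch of dense Cholesky factorization (computing a lower-triangular L with A = LL^T for a symmetric positive definite A). It illustrates only the basic factorization; it is not the authors' parallel sparse algorithm, and the function name and representation are illustrative.

```python
# Minimal dense Cholesky sketch: A = L * L^T for symmetric positive
# definite A. Illustrative only; the paper's contribution is a scalable
# PARALLEL SPARSE formulation, which this sketch does not attempt.
import math

def cholesky(A):
    """Return lower-triangular L (list of lists) with A = L L^T."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # Diagonal entry: remove contributions of already-computed columns.
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = math.sqrt(s)
        # Entries below the diagonal in column j.
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k]
                                     for k in range(j))) / L[j][j]
    return L

A = [[4.0, 2.0],
     [2.0, 3.0]]
L = cholesky(A)
```

In the sparse setting studied in the paper, most entries of A are zero, and the goal is to exploit that sparsity (and the independence it creates between columns) to factor the matrix with far less work and communication than the dense algorithm above.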
<p><b>Index Terms</b>—Parallel processing, sparse matrices, Cholesky factorization, sparse linear systems, scalability analysis, high performance computing, parallel scientific computing.</p>
G. Karypis, A. Gupta and V. Kumar, "Highly Scalable Parallel Algorithms for Sparse Matrix Factorization," in IEEE Transactions on Parallel & Distributed Systems, vol. 8, no. 5, pp. 502-520, May 1997.