Automatic Data Structure Selection and Transformation for Sparse Matrix Computations
February 1996 (vol. 7, no. 2), pp. 109-126

Abstract—The problem of compiler optimization of sparse codes is well known, and no satisfactory solution has been found yet. One of the major obstacles is that sparse programs explicitly deal with the particular data structures selected for storing sparse matrices. This explicit data structure handling obscures the functionality of a code to such a degree that optimization of the code is prohibited, for instance, by the introduction of indirect addressing. The method presented in this paper delays data structure selection until compile time, thereby allowing the compiler to combine code optimization with explicit data structure selection. This enables the compiler to generate efficient code for sparse computations, while the task of the programmer is greatly reduced in complexity.
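As an illustration of the obstacle described in the abstract, the sketch below (not taken from the paper; all names and the compressed sparse row layout are assumed for illustration only) contrasts a dense matrix-vector product with the same kernel written against an explicitly chosen sparse storage scheme. The subscript x[colind[k]] in the sparse version is the indirect addressing that hides the access pattern and data dependences from a conventional restructuring compiler.

    /* Illustrative sketch only: dense vs. CSR matrix-vector product.
     * The CSR arrays (val, colind, rowptr) and all identifiers are
     * hypothetical and not taken from the paper. */
    #include <stddef.h>

    /* Dense formulation: the subscript A[i][j] is fully analyzable. */
    void matvec_dense(size_t n, const double A[n][n],
                      const double x[n], double y[n]) {
        for (size_t i = 0; i < n; i++) {
            y[i] = 0.0;
            for (size_t j = 0; j < n; j++)
                y[i] += A[i][j] * x[j];
        }
    }

    /* Same computation after a compressed sparse row structure has been
     * selected by hand: the indirect subscript x[colind[k]] obscures the
     * loop's data accesses from a conventional optimizing compiler. */
    void matvec_csr(size_t n, const double *val, const size_t *colind,
                    const size_t *rowptr, const double *x, double *y) {
        for (size_t i = 0; i < n; i++) {
            y[i] = 0.0;
            for (size_t k = rowptr[i]; k < rowptr[i + 1]; k++)
                y[i] += val[k] * x[colind[k]];
        }
    }

Under the approach summarized above, a programmer would write only the dense-style loop nest, and the compiler would select a sparse storage scheme and generate code of the second form.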

Index Terms:
Data structure selection, data structure transformations, restructuring compilers, sparse matrix computations, program transformations.
Citation:
Aart J.C. Bik, Harry A.G. Wijshoff, "Automatic Data Structure Selection and Transformation for Sparse Matrix Computations," IEEE Transactions on Parallel and Distributed Systems, vol. 7, no. 2, pp. 109-126, Feb. 1996, doi:10.1109/71.485501