
Aart J.C. Bik and Harry A.G. Wijshoff, "Automatic Data Structure Selection and Transformation for Sparse Matrix Computations," IEEE Transactions on Parallel and Distributed Systems, vol. 7, no. 2, pp. 109-126, Feb. 1996.
Index Terms—Data structure selection, data structure transformations, restructuring compilers, sparse matrix computations, program transformations.
Abstract—The problem of compiler optimization of sparse codes is well known, and no satisfactory solution has been found yet. One of the major obstacles is that sparse programs explicitly deal with the particular data structures selected for storing sparse matrices. This explicit data structure handling obscures the functionality of a code to such a degree that optimization of the code is prevented, for instance, by the introduction of indirect addressing. The method presented in this paper delays data structure selection until the compile phase, thereby allowing the compiler to combine code optimization with explicit data structure selection. This method enables the compiler to generate efficient code for sparse computations, and it greatly reduces the complexity of the programmer's task.
[1] A.V. Aho, R. Sethi, and J.D. Ullman, Compilers: Principles, Techniques, and Tools. New York: Addison-Wesley, 1985.
[2] V. Balasundaram, Interactive Parallelization of Numerical Scientific Programs. PhD thesis, Department of Computer Science, Rice Univ., 1989.
[3] V. Balasundaram, "A Mechanism for Keeping Useful Internal Information in Parallel Programming Tools: The Data Access Descriptor," J. Parallel and Distributed Computing, vol. 9, pp. 154-170, 1990.
[4] U. Banerjee, "Unimodular transformations of double loops," Proc. Third Workshop on Languages and Compilers for Parallel Computing, 1990.
[5] U. Banerjee, Loop Transformations for Restructuring Compilers: The Foundations. Boston, Mass.: Kluwer Academic Publishers, 1993.
[6] A.J.C. Bik, "A Prototype Restructuring Compiler," master's thesis INF/SCR-92-11, Utrecht Univ., 1992.
[7] A.J.C. Bik, P.M.W. Knijnenburg, and H.A.G. Wijshoff, "Reshaping Access Patterns for Generating Sparse Codes," Lecture Notes in Computer Science, no. 892, pp. 406-422, Springer-Verlag, 1995.
[8] A.J.C. Bik and H.A.G. Wijshoff, "Advanced Compiler Optimizations for Sparse Computations," Proc. Supercomputing '93, pp. 430-439, 1993.
[9] A.J.C. Bik and H.A.G. Wijshoff, "Compilation Techniques for Sparse Matrix Computations," Proc. Int'l Conf. Supercomputing, pp. 416-424, 1993.
[10] A.J.C. Bik and H.A.G. Wijshoff, "Nonzero Structure Analysis," Proc. Int'l Conf. Supercomputing, pp. 226-235, 1994.
[11] A.J.C. Bik and H.A.G. Wijshoff, "A Note on Dealing with Subroutines and Functions in the Automatic Generation of Sparse Codes," Technical Report no. 94-43, Dept. of Computer Science, Leiden Univ., 1994.
[12] A.J.C. Bik and H.A.G. Wijshoff, "On Automatic Data Structure Selection and Code Generation for Sparse Computations," U. Banerjee, D. Gelernter, A. Nicolau, and D. Padua, eds., Lecture Notes in Computer Science, no. 768, pp. 57-75, Springer-Verlag, 1994.
[13] A.J.C. Bik and H.A.G. Wijshoff, "Construction of representative simple sections," Proc. 1995 Int'l Conf. Parallel Processing, to appear.
[14] A.J.C. Bik and H.A.G. Wijshoff, "On Strategies for Generating Sparse Codes," Technical Report no. 95-01, Dept. of Computer Science, Leiden Univ., 1995.
[15] P. Brinkhaus, "Compiler Analysis of Procedure Calls," master's thesis INF/SCR-93-13, Utrecht Univ., 1993.
[16] K. Cooper, M.W. Hall, and K. Kennedy, "Procedure Cloning," Proc. 1992 IEEE Int'l Conf. Computer Languages, Oakland, Calif., Apr. 1992.
[17] D.S. Dodson, R.G. Grimes, and J.G. Lewis, "Algorithm 692: Model Implementation and Test Package for the Sparse Linear Algebra Subprograms," ACM Trans. Mathematical Software, vol. 17, pp. 264-272, 1991.
[18] D.S. Dodson, R.G. Grimes, and J.G. Lewis, "Sparse Extensions to the FORTRAN Basic Linear Algebra Subprograms," ACM Trans. Mathematical Software, vol. 17, pp. 253-263, 1991.
[19] J. Dongarra, I.S. Duff, D.C. Sorensen, and H.A. van der Vorst, Solving Linear Systems on Vector and Shared Memory Computers. Philadelphia: SIAM, 1991.
[20] I.S. Duff, "A Sparse Future," I.S. Duff, ed., Sparse Matrices and Their Uses, pp. 1-29, Academic Press, London, 1981.
[21] I.S. Duff, "Data Structures, Algorithms and Software for Sparse Matrices," D.J. Evans, ed., Sparsity and Its Applications, pp. 1-29, Cambridge Univ. Press, 1985.
[22] J. Leonard et al., "A 66MHz DSP-Augmented RAMDAC for Smooth-Shaded Graphics Applications," IEEE J. Solid-State Circuits, vol. 26, no. 3, pp. 217-228, Mar. 1991.
[23] I.S. Duff, R. Grimes, and J. Lewis, “Sparse Matrix Test Problems,” ACM Trans. Mathematical Software, vol. 15, pp. 1–14, Mar. 1989.
[24] I.S. Duff and J.K. Reid, "Some Design Features of a Sparse Matrix Code," ACM Trans. Mathematical Software, vol. 5, pp. 18-35, 1979.
[25] J. Engelfriet, "Attribute Grammars: Attribute Evaluation Methods," Methods and Tools for Compiler Construction, B. Lorho, ed., pp. 103-138, Cambridge, England: Cambridge Univ. Press, 1984.
[26] K.A. Gallivan, B.A. Marsolf, and H.A.G. Wijshoff, "MCSPARSE: A Parallel Sparse Unsymmetric Linear System Solver," Technical Report no. 1142, Center for Supercomputing Research and Development, Univ. of Illinois, 1991.
[27] A. George and J.W.H. Liu, Computer Solution of Large Sparse Positive Definite Systems. Englewood Cliffs, N.J.: Prentice-Hall, 1981.
[28] F.G. Gustavson, "Two Fast Algorithms for Sparse Matrices: Multiplication and Permuted Transposition," ACM Trans. Mathematical Software, vol. 4, pp. 250-269, 1978.
[29] W. Li and K. Pingali, "A Singular Loop Transformation Framework Based on Non-Singular Matrices," Proc. Fifth Workshop Languages and Compilers for Parallel Computers, pp. 249-260, 1992.
[30] K.J. Mann, "Inversion of Large Sparse Matrices: Direct Methods," J. Noye, ed., Numerical Solutions of Partial Differential Equations, pp. 313-366, Amsterdam: North-Holland Publishing Company, 1982.
[31] D.A. Padua and M.J. Wolfe, "Advanced Compiler Optimizations for Supercomputers," Comm. ACM, vol. 29, Dec. 1986.
[32] S. Pissanetsky, Sparse Matrix Technology. Academic Press, London, 1984.
[33] C.D. Polychronopoulos, Parallel Programming and Compilers. Boston, Mass.: Kluwer Academic Publishers, 1988.
[34] Y. Saad and H.A.G. Wijshoff, "Spark: A Benchmark Package for Sparse Computations," Proc. 1990 Int'l Conf. Supercomputing, pp. 239-253, 1990.
[35] J. Saltz, K. Crowley, R. Mirchandaney, and H. Berryman, "Run-Time Scheduling and Execution of Loops on Message Passing Machines," J. Parallel and Distributed Computing, vol. 8, pp. 303-312, 1990.
[36] J. Saltz, R. Mirchandaney, and K. Crowley, "The doconsider Loop," Proc. 1989 Int'l Conf. Supercomputing, pp. 29-40, June 1989.
[37] J.H. Saltz, R. Mirchandaney, and K. Crowley, "RunTime Parallelization and Scheduling of Loops," IEEE Trans. Computers, vol. 40, May 1991.
[38] R.P. Tewarson, Sparse Matrices. New York: Academic Press, 1973.
[39] H.A.G. Wijshoff, "Implementing Sparse BLAS Primitives on Concurrent/Vector Processors: A Case Study," Technical Report no. 843, Center for Supercomputing Research and Development, Univ. of Illinois, 1989.
[40] M. Wolf and M. Lam, “A Loop Transformation Theory and an Algorithm to Maximize Parallelism,” IEEE Trans. Parallel and Distributed Systems, vol. 2, no. 4, Oct. 1991.
[41] M. Wolfe, Optimizing Supercompilers for Supercomputers. Cambridge, Mass.: MIT Press, 1989.
[42] H. Zima, Supercompilers for Parallel and Vector Computers. New York: ACM Press, 1990.
[43] Z. Zlatev, Computational Methods for General Sparse Matrices. Kluwer Academic Publishers, 1991.