Lattice-Based Memory Allocation
October 2005 (vol. 54 no. 10)
pp. 1242-1257
We investigate the problem of memory reuse in order to reduce the memory needed to store an array variable. We develop techniques that can lead to smaller memory requirements in the synthesis of dedicated processors or to more effective use by compiled code of software-controlled scratchpad memory. Memory reuse is well-understood for allocating registers to hold scalar variables. Its extension to arrays has been studied recently for multimedia applications, for loop parallelization, and for circuit synthesis from recurrence equations. In all such studies, the introduction of modulo operations to an otherwise affine mapping (of loop or array indices to memory locations) achieves the desired reuse. We develop here a new mathematical framework, based on critical lattices, that subsumes the previous approaches and provides new insight. We first consider the set of indices that conflict, those that cannot be mapped to the same memory cell. Next, we construct the set of differences of conflicting indices. We establish a correspondence between a valid modular mapping and a strictly admissible integer lattice—one having no nonzero element in common with the set of conflicting index differences. The memory required by an optimal modular mapping is equal to the determinant of the corresponding lattice. The memory reuse problem is thus reduced to the (still interesting and nontrivial) problem of finding a strictly admissible integer lattice of least determinant. We then propose and analyze several practical strategies for finding strictly admissible integer lattices, either optimal or optimal up to a multiplicative factor, and, hence, memory-saving modular mappings. We explain and analyze previous approaches in terms of our new framework.
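To make the stated correspondence concrete, the following is a minimal sketch (in Python with NumPy; not the authors' code) for a hypothetical two-dimensional example. The conflict difference set DS, the candidate lattice basis, and the resulting mapping (i1 mod 2, i2 mod 3) are illustrative assumptions; the sketch only checks strict admissibility and reads off the memory requirement as the lattice determinant, it does not search for an optimal lattice.

```python
# Minimal sketch of the validity check described in the abstract
# (illustrative only; DS and the candidate basis below are assumptions).
import numpy as np

def is_strictly_admissible(basis, DS):
    """True if the integer lattice spanned by the columns of `basis`
    has no nonzero element in common with the difference set DS."""
    inv = np.linalg.inv(basis.astype(float))
    for d in DS:
        if not any(d):                              # skip the zero difference
            continue
        coords = inv @ np.array(d)
        if np.allclose(coords, np.round(coords)):   # d belongs to the lattice
            return False
    return True

# Hypothetical conflict set: two indices conflict when they differ by at
# most 1 in the first dimension and at most 2 in the second.
DS = [(d1, d2) for d1 in range(-1, 2) for d2 in range(-2, 3)]

# Candidate lattice 2Z x 3Z, i.e., the kernel of the modular mapping
# (i1, i2) -> (i1 mod 2, i2 mod 3).
basis = np.array([[2, 0],
                  [0, 3]])

print(is_strictly_admissible(basis, DS))    # True: the mapping is valid
print(abs(round(np.linalg.det(basis))))     # 6 memory cells suffice
```

Any two conflicting indices differ by a nonzero element of DS; since the lattice contains no such element, they fall in different residue classes and are therefore assigned distinct cells, using det = 6 cells in all.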

Index Terms:
Program transformation, memory size reduction, admissible lattice, successive minima.
Citation:
Alain Darte, Robert Schreiber, Gilles Villard, "Lattice-Based Memory Allocation," IEEE Transactions on Computers, vol. 54, no. 10, pp. 1242-1257, Oct. 2005, doi:10.1109/TC.2005.167