<p><b>Abstract</b>—This paper presents a unified framework that optimizes out-of-core programs by exploiting locality and parallelism and by reducing communication overhead. For out-of-core problems, where the data set sizes far exceed the size of the available in-core memory, it is particularly important to exploit the memory hierarchy by optimizing the I/O accesses. We present algorithms that consider both iteration space (loop) and data space (file layout) transformations in a unified framework. We show that the performance of an out-of-core loop nest containing references to out-of-core arrays can be improved by using a suitable combination of file layout choices and loop restructuring transformations. Our approach considers array references one by one and attempts to optimize each reference for parallelism and locality. When there are references for which parallelism optimizations do not work, communication is vectorized so that data transfer can be performed before the innermost loop. Results from hand-compiled experiments on the IBM SP-2 and Intel Paragon distributed-memory message-passing architectures show that this approach reduces execution times and improves overall speedups. In addition, we extend the base algorithm to work with file layout constraints and show how it is useful for optimizing programs that consist of multiple loop nests.</p>
<p><b>Index Terms</b>—I/O-intensive codes, optimizing compilers, loop and data transformations, out-of-core computations, file layouts.</p>
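The locality benefit described in the abstract can be illustrated with a minimal sketch (not the paper's algorithm; the array size, block size, and one-block cache below are hypothetical): an out-of-core 2-D array stored row-major in a file is fetched one I/O block at a time, and the number of block fetches is counted for a loop order that matches the file layout versus one that conflicts with it.

```python
# Minimal sketch (hypothetical parameters, not the paper's algorithm):
# model an out-of-core N x N array stored row-major on disk, fetched
# one fixed-size block at a time with a single-block buffer, and count
# block fetches for two loop orders.

N = 64       # array is N x N elements (hypothetical size)
BLOCK = 64   # elements per I/O block (hypothetical)

def block_fetches(loop_order):
    """Count block reads for traversing the whole array."""
    fetches, cached = 0, None
    for i in range(N):
        for j in range(N):
            r, c = (i, j) if loop_order == "ij" else (j, i)
            blk = (r * N + c) // BLOCK  # row-major offset -> block id
            if blk != cached:           # miss: read one block from file
                fetches += 1
                cached = blk
    return fetches

good = block_fetches("ij")  # traversal order matches row-major layout
bad = block_fetches("ji")   # column-wise traversal conflicts with layout
print(good, bad)            # the mismatched order fetches far more blocks
```

In this toy model, the layout-matching order reads each block once, while the mismatched order re-fetches a block on nearly every access; the paper's framework chooses between restructuring the loop and changing the file layout to reach the matched case.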

J. Ramanujam, A. Choudhary, M. Kandemir and M. A. Kandaswamy, "A Unified Framework for Optimizing Locality, Parallelism, and Communication in Out-of-Core Computations," in IEEE Transactions on Parallel and Distributed Systems, vol. 11, pp. 648-668, 2000.