<p><b>Abstract</b>—This paper presents a new compiler optimization algorithm that parallelizes applications for symmetric, shared-memory multiprocessors. The algorithm considers data locality, parallelism, and the granularity of parallelism. It uses dependence analysis and a simple cache model to drive its optimizations. It also optimizes across procedures by using interprocedural analysis and transformations. We validate the algorithm by hand-applying it to sequential versions of parallel Fortran programs operating over dense matrices. The programs were initially hand-coded to target a variety of parallel machines using loop parallelism. We ignore the user's parallel loop directives and instead use known, implemented dependence and interprocedural analyses to find parallelism. We then apply our new optimization algorithm to the resulting program. We compare the original parallel program to the hand-optimized program, and show that our algorithm improves three programs, matches four programs, and degrades one program in our test suite on a shared-memory, bus-based parallel machine with local caches. This experiment suggests that existing dependence and interprocedural array analyses can automatically detect user parallelism, and demonstrates that user-parallelized codes often benefit from our compiler optimizations, providing evidence that we need <i>both</i> parallel algorithms and compiler optimizations to effectively utilize parallel machines.</p>
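<p>The abstract's claim that dependence analysis can recover user parallelism rests on a simple observation: a loop may run its iterations in parallel only when no iteration reads a value written by a different iteration. The sketch below is not the paper's algorithm; it is a minimal, hypothetical illustration of that kind of loop-carried dependence test for affine array subscripts of the form <i>a[i + offset]</i>.</p>

```python
def has_loop_carried_dependence(write_offset, read_offset):
    """Toy dependence test for a loop whose body is
    a[i + write_offset] = f(a[i + read_offset]).

    A dependence is carried across iterations when the element
    written in one iteration is read in a *different* iteration,
    i.e. when the two subscript offsets differ.  (Illustrative
    only; a real compiler uses GCD/Banerjee-style tests over
    general affine subscripts.)
    """
    return write_offset != read_offset


def parallelizable(write_offset, read_offset):
    """A loop with no loop-carried dependence is safe to parallelize."""
    return not has_loop_carried_dependence(write_offset, read_offset)


# a[i] = 2 * a[i]   -> iterations touch disjoint elements: parallel
print(parallelizable(0, 0))    # True
# a[i] = a[i-1] + 1 -> each iteration reads the previous one: serial
print(parallelizable(0, -1))   # False
```

<p>A parallelizing compiler of the sort the paper describes layers a cache model and interprocedural analysis on top of such tests to also choose <i>which</i> loop to parallelize and at what granularity.</p>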
<p><b>Index Terms</b>—Program parallelization, parallelization techniques, program optimization, data locality, restructuring compilers, performance evaluation.</p>

K. S. McKinley, "A Compiler Optimization Algorithm for Shared-Memory Multiprocessors," IEEE Transactions on Parallel and Distributed Systems, vol. 9, pp. 769-787, 1998.