2008 International Conference on Parallel Architectures and Compilation Techniques (PACT) (2008)
Toronto, ON, Canada
Oct. 25, 2008 to Oct. 29, 2008
ISBN: 978-1-5090-3021-7
pp. 102-111
Hiroshige Hayashizaki, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
Yutaka Sugawara, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
Mary Inaba, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
Kei Hiraki, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
ABSTRACT
Massively parallel machines that integrate a large number of simple processors and small scratch-pad memories (SPMs) on a single chip can achieve high peak performance per watt. On these machines, communication optimization is important because communication bandwidth tends to be the bottleneck. Previously proposed communication optimizations based on copy candidates, which have been shown to be effective, detect frequently reused array regions by compile-time analysis and copy those regions to scratch-pad memories closer to the processors. However, these techniques were designed for uniprocessor systems or small parallel machines with one or more layers of scratch-pad memory, and their analysis time grows when they are applied to massively parallel machines. In this paper, we propose Multilayer Copy-candidate Analysis for Massively Parallel machines (MCAMP), a communication optimization method for massively parallel machines. MCAMP re-formalizes the framework used in earlier work and improves the scalability of the analysis by assuming the homogeneity of the target systems. We implemented an MCAMP optimizer that takes an input program consisting of perfectly nested loops containing array references and computation code, and generates optimized communication. We measured the performance of the optimizer's output programs by executing them on a real massively parallel machine, GRAPE-DR, using a software tool chain that we also implemented. We show that MCAMP achieves optimal data transfer patterns and performance comparable to that of hand-optimized code, with a short analysis time.
INDEX TERMS
Program processors, Parallel machines, Optimization, Pipelines, Micromechanical devices, Memory management, Information science
CITATION
Hiroshige Hayashizaki, Yutaka Sugawara, Mary Inaba, Kei Hiraki, "MCAMP: Communication optimization on Massively Parallel Machines with hierarchical scratch-pad memory", 2008 International Conference on Parallel Architectures and Compilation Techniques (PACT), pp. 102-111, 2008.