2010 International Conference on Networking and Computing (IC-NC 2010)
Higashi-Hiroshima, Japan
Nov. 17, 2010 to Nov. 19, 2010
ISBN: 978-0-7695-4277-5
pp: 63-70
ABSTRACT
We have proposed an auto-memoization processor based on computation reuse, and have combined it with speculative multithreading based on value prediction into a scheme called parallel early computation. In the previous model, parallel early computation detects each iteration of a loop as a reusable block. This paper proposes a new parallel early computation model, which automatically and dynamically integrates multiple continuous iterations into a single reusable block without modifying executable binaries. We also propose a model for automatically deciding how many iterations should be integrated into one reusable block. Our model reduces the overhead of computation reuse and makes better use of the reuse tables. Experiments with the SPEC CPU95 FP benchmark suite show that the new model improves the maximum speedup from 40.5% to 57.6%, and the average speedup from 15.0% to 26.0%.
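The mechanism described above is implemented in processor hardware, but the underlying idea can be illustrated in software. The sketch below (hypothetical names and structures, not the authors' implementation) contrasts per-iteration reuse with collective reuse: here, FUSE consecutive iterations share one reuse-table entry, so the lookup and registration overhead is amortized over the whole block, which is the effect the proposed model aims for.

```c
/*
 * Minimal software sketch of collective computation reuse, assuming a
 * fixed fusion degree FUSE. All names here are illustrative only; the
 * paper's mechanism is realized in hardware and chooses the fusion
 * degree automatically.
 */
#include <stdio.h>
#include <string.h>

#define TABLE_SIZE 1024
#define FUSE 4              /* iterations folded into one reusable block */

struct entry {
    int valid;
    double in[FUSE];        /* inputs of the fused block */
    double out[FUSE];       /* memoized outputs          */
};

static struct entry table[TABLE_SIZE];

static double body(double x)          /* the loop body being reused */
{
    return x * x + 2.0 * x + 1.0;
}

static unsigned hash_inputs(const double *in)
{
    unsigned h = 2166136261u;         /* FNV-1a over the input bytes */
    const unsigned char *p = (const unsigned char *)in;
    for (size_t i = 0; i < FUSE * sizeof(double); i++)
        h = (h ^ p[i]) * 16777619u;
    return h % TABLE_SIZE;
}

/* Execute FUSE iterations at once, reusing a prior result when the
 * inputs match; only one table lookup is paid per FUSE iterations. */
static void run_block(const double *in, double *out)
{
    struct entry *e = &table[hash_inputs(in)];
    if (e->valid && memcmp(e->in, in, sizeof e->in) == 0) {
        memcpy(out, e->out, sizeof e->out);   /* computation reuse hit */
        return;
    }
    for (int i = 0; i < FUSE; i++)            /* miss: compute and register */
        out[i] = body(in[i]);
    memcpy(e->in, in, sizeof e->in);
    memcpy(e->out, out, sizeof e->out);
    e->valid = 1;
}

int main(void)
{
    double in[FUSE] = {1.0, 2.0, 3.0, 4.0}, out[FUSE];
    run_block(in, out);   /* miss: computes the block and fills the table */
    run_block(in, out);   /* hit: all four iterations reused with one lookup */
    printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```

With per-iteration reuse, the same input sequence would cost four lookups and four registrations; fusing the iterations cuts that to one of each, which is the overhead reduction the abstract refers to.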
CITATION
Yasuhiko Nakashima, Tomoki Ikegaya, Hiroshi Matsuo, Tomoaki Tsumura, "A Speed-Up Technique for an Auto-Memoization Processor by Collectively Reusing Continuous Iterations," 2010 International Conference on Networking and Computing (IC-NC 2010), pp. 63-70, 2010, doi:10.1109/IC-NC.2010.46