2014 Second International Symposium on Computing and Networking (CANDAR)
Shizuoka, Japan
Dec. 10, 2014 to Dec. 12, 2014
ISBN: 978-1-4799-4152-0
pp: 426-432
We have proposed a processor called the Auto-Memoization Processor, which is based on computation reuse, and have merged it with speculative multi-threading based on value prediction into a mechanism called Parallel Speculative Execution. The processor dynamically detects functions and loop iterations as reusable blocks and automatically registers their inputs and outputs in a table called the Reuse Table. When the processor later detects the same block, it compares the current input sequence with the previous input sequences registered in the Reuse Table to decide whether computation reuse can be applied. In this paper, we propose a hinting technique for the Auto-Memoization Processor based on static binary analysis. The hints identify two distinctive types of inputs for loop bodies. The first type is an unchanging value: when applying computation reuse to a loop, the processor can skip comparing such inputs against the values in the Reuse Table. The second type is a non-monotonically changing value: loops with such inputs will not benefit from computation reuse, so the processor can stop applying useless computation reuse to their iterations. By hinting these input types to the processor, the overhead of the Auto-Memoization Processor can be reduced. Experiments with the SPEC CPU95 benchmark suite show that the hinting technique improves the maximum speedup from 40.6% to 51.8%, and the average speedup from 11.9% to 16.5%.
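The reuse mechanism described above can be illustrated in software terms. The following sketch is purely hypothetical (the paper describes a hardware mechanism, not this code): a reuse table maps a block's input sequence to its recorded outputs, so a repeated input sequence skips re-execution; all names here are illustrative.

```python
def make_reusable(fn):
    """Hypothetical software analogue of computation reuse:
    cache a block's outputs keyed by its input sequence."""
    reuse_table = {}  # input tuple -> recorded output

    def wrapper(*inputs):
        if inputs in reuse_table:        # same input sequence seen before
            return reuse_table[inputs]   # reuse the recorded output
        result = fn(*inputs)             # execute the block
        reuse_table[inputs] = result     # register inputs and output
        return result

    return wrapper

@make_reusable
def heavy_block(x, y):
    # Stand-in for a dynamically detected function or loop body.
    return x * x + y
```

In this analogy, a hint marking an input as unchanging would let the wrapper omit that input from the lookup key, and a hint marking a non-monotonically changing input would disable caching for the block entirely, since its input sequence would rarely repeat.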
Registers, Indexes, Parallel processing, Delays, Engines, Impedance matching, Lattices

Y. Shibata, K. Kamimura, T. Tsumura and Y. Nakashima, "Hinting for Auto-Memoization Processor Based on Static Binary Analysis," 2014 Second International Symposium on Computing and Networking (CANDAR), Shizuoka, Japan, 2014, pp. 426-432.