Issue No. 06 - June (2001 vol. 50)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/12.931893
<p><b>Abstract</b>—This paper presents a system in which already-executing user code is continually and automatically reoptimized in the background, using dynamically collected execution profiles as a guide. Whenever a new code image has been constructed in the background in this manner, it is hot-swapped in place of the previously executing one. Control is then transferred to the new code and construction of yet another code image is initiated in the background. Two new runtime optimization techniques have been implemented in the context of this system: object layout adaptation and dynamic trace scheduling. The former technique continually improves the storage layout of dynamically allocated data structures to increase data cache locality. The latter increases instruction-level parallelism by continually adapting the instruction schedule to the predominantly executed program paths. The empirical results presented in this paper make a case in favor of continuous optimization, but also indicate some of its pitfalls and current shortcomings. If not applied judiciously, the costs of dynamic optimizations outweigh their benefits in many situations, so that no break-even point is ever reached. In favorable circumstances, however, speed-ups of over 96 percent have been observed. The main beneficiaries of continuous optimization appear to be shared libraries in specific application domains which, at different times, can be optimized in the context of the currently dominant client application.</p>
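The profile–rebuild–hot-swap cycle described in the abstract can be illustrated with a toy sketch. This is not the paper's system, which reoptimizes native code images in a background thread; here, for illustration only, "recompilation" is a value-specialized Python function, the class name `ContinuousOptimizer` and all identifiers are invented, and the rebuild step is invoked synchronously.

```python
class ContinuousOptimizer:
    """Toy sketch of continuous optimization: profile calls to the
    currently executing "code image", build a specialized replacement,
    and hot-swap it in. (Illustrative only; the real system rewrites
    native code in the background.)"""

    def __init__(self, func):
        self.current = func   # the currently executing code image
        self.profile = {}     # dynamically collected execution profile

    def __call__(self, x):
        # Every call updates the execution profile, then dispatches
        # through the current (possibly already swapped) image.
        self.profile[x] = self.profile.get(x, 0) + 1
        return self.current(x)

    def reoptimize(self):
        # Build a new image specialized for the hottest observed input,
        # then hot-swap it in place of the previous one. Rebinding
        # self.current plays the role of transferring control to the
        # newly constructed code image.
        hot = max(self.profile, key=self.profile.get)
        old = self.current
        cached = old(hot)

        def specialized(x, _hot=hot, _cached=cached, _old=old):
            return _cached if x == _hot else _old(x)

        self.current = specialized  # the hot swap


def square(x):
    return x * x

f = ContinuousOptimizer(square)
for _ in range(10):
    f(3)          # 3 becomes the hot input
f(4)
f.reoptimize()    # background rebuild + hot swap, done inline here
print(f(3))       # still 9, now served by the specialized image
```

The same dispatch indirection (`self.current`) that makes the swap possible is what the paper exploits at the machine-code level: callers never hold a direct reference to a code image, so a new image can replace the old one between calls.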
<p><b>Index Terms</b>—Dynamic compilation, continuous optimization, memory optimization, trace scheduling, profiling.</p>
Michael Franz, Thomas Kistler, "Continuous Program Optimization: Design and Evaluation", IEEE Transactions on Computers, vol. 50, no. 6, pp. 549-566, June 2001, doi:10.1109/12.931893