<p><b>Abstract</b>—Wide-issue processors continue to achieve higher performance by exploiting greater instruction-level parallelism. Dynamic techniques such as out-of-order execution and hardware speculation have proven effective at increasing instruction throughput. Runtime optimization promises to provide an even higher level of performance by adaptively applying aggressive code transformations over a larger scope. This paper presents a new hardware mechanism for generating and deploying runtime-optimized code. The mechanism can be viewed as a filtering system that resides in the retirement stage of the processor pipeline, accepts an instruction execution stream as input, and produces instruction profiles and sets of linked, optimized traces as output. The code deployment mechanism uses an extension to the branch prediction mechanism to migrate execution into the new code without modifying the original code. These new components add no delay to the execution of the program except during short bursts of reoptimization. This technique provides a strong platform for runtime optimization because the hot execution regions are extracted, optimized, and written to main memory for execution, and because these regions persist across context switches. The current design of the framework supports a suite of optimizations, including partial function inlining (even into shared libraries), code straightening optimizations, loop unrolling, and peephole optimizations.</p>
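The abstract's core idea, profiling a retired-instruction stream and extracting hot execution paths into traces, can be illustrated with a small software sketch. The threshold, trace-length cap, and all function and variable names below are illustrative assumptions for exposition; they are not the paper's actual hardware design, which operates on the pipeline's retirement stage rather than in software.

```python
# Sketch of retirement-stage trace formation (assumed parameters, not the
# paper's design): count how often each basic block retires, and once a
# block crosses a hotness threshold, grow a trace by following its most
# frequently taken successor edges.

HOT_THRESHOLD = 3      # retirements before a block is considered hot (assumed)
MAX_TRACE_LEN = 4      # cap on blocks per trace (assumed)

def form_traces(retired_blocks, successors):
    """retired_blocks: basic-block ids in retirement order.
    successors: dict mapping block -> {successor block: taken count}."""
    counts = {}
    traces = {}
    for blk in retired_blocks:
        counts[blk] = counts.get(blk, 0) + 1
        if counts[blk] == HOT_THRESHOLD and blk not in traces:
            # Grow a trace along the hottest successor edges.
            trace, cur = [blk], blk
            while len(trace) < MAX_TRACE_LEN and successors.get(cur):
                cur = max(successors[cur], key=successors[cur].get)
                if cur in trace:   # stop at a loop back-edge
                    break
                trace.append(cur)
            traces[blk] = trace    # head block -> candidate optimized trace
    return traces
```

In the paper's framework the analogous step happens in hardware, and deployment redirects execution into the new trace through the extended branch predictor instead of patching the original code.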
<p><b>Index Terms</b>—Postlink optimization, runtime optimization, dynamic optimization, hardware profiling, low-overhead profiling, code layout, program hot spot, partial function inlining, trace formation and optimization.</p>

W. W. Hwu et al., "An Architectural Framework for Runtime Optimization," IEEE Transactions on Computers, vol. 50, pp. 567-589, 2001.