<p>In a shared-memory multiprocessor system, it may be more efficient to schedule a task on one processor than on another if relevant data already reside in a particular processor's cache. The effects of this type of processor affinity are examined. It is observed that tasks continuously alternate between executing at a processor and releasing this processor due to I/O, synchronization, quantum expiration, or preemption. Queueing network models of different abstract scheduling policies are formulated, spanning the range from ignoring affinity to fixing tasks on processors. These models are solved via mean value analysis, where possible, and by simulation otherwise. An analytic cache model is developed and used in these scheduling models to include the effects of an initial burst of cache misses experienced by tasks when they return to a processor for execution. A mean-value technique is also developed and used in the scheduling models to include the effects of increased bus traffic due to these bursts of cache misses. Only a small amount of affinity information needs to be maintained for each task. The importance of having a policy that adapts its behavior to changes in system load is demonstrated.</p>
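The abstract notes that the scheduling models are solved via mean value analysis where possible. As background, a minimal sketch of exact MVA for a closed, single-class product-form network of single-server FCFS stations (a textbook simplification, not the paper's specific multiprocessor models) might look like:

```python
def mva(demands, n_customers):
    """Exact mean value analysis for a closed product-form queueing
    network of single-server FCFS stations.

    demands[k] -- mean service demand of a customer at station k
    Returns (system throughput, per-station mean queue lengths).
    """
    q = [0.0] * len(demands)  # mean queue lengths with 0 customers
    x = 0.0
    for n in range(1, n_customers + 1):
        # Residence time at each station: by the arrival theorem, an
        # arriving customer sees the queue length of the network with
        # one fewer customer, so R_k(n) = D_k * (1 + Q_k(n-1)).
        r = [d * (1.0 + qk) for d, qk in zip(demands, q)]
        x = n / sum(r)             # throughput, via Little's law
        q = [x * rk for rk in r]   # updated mean queue lengths
    return x, q
```

For example, two balanced stations with demand 0.5 each and two customers yield a throughput of 4/3 and a mean queue length of 1.0 at each station.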
Index Terms—queueing network models; processor-cache affinity information; shared-memory multiprocessor scheduling; I/O; synchronization; quantum expiration; preemption; mean value analysis; analytic cache model; buffer storage; performance evaluation; queueing theory; scheduling; shared memory systems

E. Lazowska and M. Squillante, "Using Processor-Cache Affinity Information in Shared-Memory Multiprocessor Scheduling," in IEEE Transactions on Parallel & Distributed Systems, vol. 4, pp. 131-143, 1993.