Issue No. 04 - April (1994 vol. 5)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/71.273046
<p>Loops are the single largest source of parallelism in many applications. One way to exploit this parallelism is to execute loop iterations in parallel on different processors. Previous approaches to loop scheduling attempted to achieve the minimum completion time by distributing the workload as evenly as possible while minimizing the number of synchronization operations required. The authors consider a third dimension to the problem of loop scheduling on shared-memory multiprocessors: communication overhead caused by accesses to nonlocal data. They show that traditional algorithms for loop scheduling, which ignore the location of data when assigning iterations to processors, incur a significant performance penalty on modern shared-memory multiprocessors. They propose a new loop scheduling algorithm that attempts to simultaneously balance the workload, minimize synchronization, and co-locate loop iterations with the necessary data. They compare the performance of this new algorithm to other known algorithms by using five representative kernel programs on a Silicon Graphics multiprocessor workstation, a BBN Butterfly, a Sequent Symmetry, and a KSR-1, and show that the new algorithm offers substantial performance improvements, up to a factor of 4 in some cases. The authors conclude that loop scheduling algorithms for shared-memory multiprocessors cannot afford to ignore the location of data, particularly in light of the increasing disparity between processor and memory speeds.</p>
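<p>The abstract's three goals (balance the workload, minimize synchronization, co-locate iterations with their data) can be illustrated with a small sketch. This is a hypothetical simplification written for this summary, not the paper's exact algorithm: each processor initially owns the iterations whose data it is assumed to cache, repeatedly takes a fraction of its own remaining queue (cheap, low-synchronization self-scheduling on local work), and steals from the most loaded processor only when its own queue drains (load balancing at the cost of nonlocal accesses).</p>

```python
def affinity_schedule(n_iters, n_procs):
    """Simulate a simplified affinity-scheduling policy.

    Returns, per processor, the ordered list of iteration chunks it ran.
    Hypothetical sketch: chunk sizes and stealing policy are illustrative.
    """
    chunk = (n_iters + n_procs - 1) // n_procs
    # Static partition: processor p initially owns iterations
    # [p*chunk, min((p+1)*chunk, n_iters)) -- the ones "affine" to it.
    queues = [list(range(p * chunk, min((p + 1) * chunk, n_iters)))
              for p in range(n_procs)]
    executed = [[] for _ in range(n_procs)]
    # Round-robin simulation: processors take work until all queues drain.
    while any(queues):
        for p in range(n_procs):
            # Prefer local work; steal from the most loaded processor
            # only when the local queue is empty.
            src = p if queues[p] else max(range(n_procs),
                                          key=lambda q: len(queues[q]))
            if not queues[src]:
                continue
            take = max(1, len(queues[src]) // n_procs)  # 1/P of remainder
            executed[p].append(queues[src][:take])
            del queues[src][:take]
    return executed
```

<p>With a perfectly uniform loop no stealing occurs and every iteration runs on the processor that owns its data; with an imbalanced loop the idle processors drain the loaded one's queue, trading some locality for completion time.</p>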
Index Terms: shared memory systems; scheduling; performance evaluation; loop scheduling; processor affinity; shared-memory multiprocessors; loop iterations; communication overhead; kernel programs; Silicon Graphics multiprocessor; BBN Butterfly; Sequent Symmetry; KSR-1; performance improvements; synchronization; load imbalance
E. Markatos and T. LeBlanc, "Using Processor Affinity in Loop Scheduling on Shared-Memory Multiprocessors," in IEEE Transactions on Parallel & Distributed Systems, vol. 5, no. 4, pp. 379-400, April 1994.