Issue No. 01 - January (1993 vol. 4)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/71.205655
<p>A practical processor self-scheduling scheme, trapezoid self-scheduling, is proposed for arbitrary parallel nested loops in shared-memory multiprocessors. Loops are generally the richest source of parallelism in parallel programs. By dynamically allocating loop iterations to processors, one may achieve load balancing among processors at the expense of run-time scheduling overhead. By linearly decreasing the chunk size at run time, the proposed trapezoid self-scheduling approach obtains the best tradeoff between scheduling overhead and balanced workload. Due to its simplicity and flexibility, this approach can be efficiently implemented in any parallel compiler. The small and predictable number of chores also allows efficient management of memory in a static fashion. Experiments conducted on a 96-node Butterfly GP-1000 clearly show the advantage of trapezoid self-scheduling over other well-known self-scheduling approaches.</p>
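The linearly decreasing chunk sizes described above can be sketched as follows. This is a minimal illustration, not the paper's exact pseudocode: the function name and parameters are illustrative, with the first chunk defaulting to N/(2p) and the last chunk to 1, and the chunk count derived from the average chunk size (the trapezoid's area).

```python
def trapezoid_chunks(n_iters, n_procs, first=None, last=1):
    """Sketch of trapezoid self-scheduling chunk sizes.

    Chunk sizes decrease linearly from `first` toward `last`,
    so early chunks are large (low scheduling overhead) and late
    chunks are small (good load balance near the end of the loop).
    """
    if first is None:
        # Suggested default: half of an even per-processor share.
        first = max(n_iters // (2 * n_procs), 1)
    # Estimated chunk count = total iterations / average chunk size.
    n_chunks = max((2 * n_iters + first + last - 1) // (first + last), 1)
    # Per-chunk decrement giving a linear decrease from first to last.
    delta = (first - last) / (n_chunks - 1) if n_chunks > 1 else 0.0
    chunks, remaining, size = [], n_iters, float(first)
    while remaining > 0:
        step = min(max(int(round(size)), 1), remaining)
        chunks.append(step)
        remaining -= step
        size -= delta
    return chunks
```

For 100 iterations on 4 processors this yields 14 chunks starting at size 12 and shrinking toward 1; because the chunk count and sizes are fixed once the loop bounds are known, the queue of chunks can be laid out statically in memory, as the abstract notes.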
Index Terms: dynamic allocation; memory management; parallel compilers; processor self-scheduling; trapezoid self-scheduling; parallel nested loops; shared-memory multiprocessors; parallel programs; loop iterations; load balancing; run-time scheduling overhead; chunk size; Butterfly GP-1000; parallel programming; program compilers; scheduling; shared memory systems
T. H. Tzen and L. M. Ni, "Trapezoid Self-Scheduling: A Practical Scheduling Scheme for Parallel Compilers," in IEEE Transactions on Parallel & Distributed Systems, vol. 4, no. 1, pp. 87-98, Jan. 1993.