Trapezoid Self-Scheduling: A Practical Scheduling Scheme for Parallel Compilers
January 1993 (vol. 4, no. 1)
pp. 87-98

A practical processor self-scheduling scheme, trapezoid self-scheduling, is proposed for arbitrary parallel nested loops on shared-memory multiprocessors. Loops are generally the richest source of parallelism in parallel programs. Dynamically allocating loop iterations to processors achieves load balancing among processors at the expense of run-time scheduling overhead. By linearly decreasing the chunk size at run time, the proposed trapezoid self-scheduling approach obtains the best tradeoff between scheduling overhead and balanced workload. Owing to its simplicity and flexibility, the approach can be implemented efficiently in any parallel compiler. The small and predictable number of chores also allows memory to be managed efficiently in a static fashion. Experiments conducted on a 96-node Butterfly GP-1000 clearly show the advantage of trapezoid self-scheduling over other well-known self-scheduling approaches.
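As a rough illustration of the linearly decreasing chunk sizes the abstract describes, the sketch below computes a trapezoid self-scheduling chunk sequence for N iterations and P processors. The parameter choices (first chunk f = N/(2P), last chunk l = 1, and the derived decrement) follow commonly cited TSS defaults and are assumptions for illustration, not details quoted from this page; the values of N and P are arbitrary examples.

    #include <stdio.h>

    int main(void) {
        int N = 1000;                 /* total loop iterations (example value)   */
        int P = 8;                    /* number of processors (example value)    */
        int f = N / (2 * P);          /* assumed first-chunk size, N/(2P)        */
        int l = 1;                    /* assumed last-chunk size                 */
        int C = (2 * N + f + l - 1) / (f + l);   /* approx. ceil(2N/(f+l)) chunks */
        int steps = (C > 1) ? C - 1 : 1;
        double delta = (double)(f - l) / steps;  /* linear decrement per chunk   */

        double size = f;              /* current chunk size                      */
        int next = 0;                 /* next unscheduled iteration              */
        int chunk = 0;
        while (next < N) {
            int s = (int)size;
            if (s < 1) s = 1;                    /* never dispatch an empty chunk */
            if (next + s > N) s = N - next;      /* clamp the final chunk         */
            printf("chunk %2d: iterations [%4d, %4d)\n", chunk++, next, next + s);
            next += s;
            size -= delta;                       /* each chunk is delta smaller   */
        }
        return 0;
    }

In an actual self-scheduling runtime, each processor would claim its chunk with an atomic fetch-and-add on the shared iteration counter rather than this sequential loop; because the chunk sequence is fixed once N and P are known, the number of chores is small and predictable, which is what permits the static memory management the abstract mentions.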

Index Terms:
dynamic allocation; memory management; parallel compilers; processor self-scheduling; trapezoid self-scheduling; parallel nested loops; shared-memory multiprocessors; parallel programs; loop iterations; load balancing; run-time scheduling overhead; chunk size; Butterfly GP-1000; parallel programming; program compilers; scheduling; shared memory systems
Citation:
T.H. Tzen, L.M. Ni, "Trapezoid Self-Scheduling: A Practical Scheduling Scheme for Parallel Compilers," IEEE Transactions on Parallel and Distributed Systems, vol. 4, no. 1, pp. 87-98, Jan. 1993, doi:10.1109/71.205655