Issue No. 09 - September 2008 (vol. 19)

pp: 1263-1279

Yuxiong He, Singapore-MIT Alliance, Nanyang Technological University, Singapore

Wen-Jing Hsu, Nanyang Technological University, Singapore

Charles E. Leiserson, MIT, Cambridge

DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TPDS.2008.39

ABSTRACT

Multiprocessor scheduling in a shared multiprogramming environment can be structured in two levels, where a kernel-level job scheduler allots processors to jobs and a user-level thread scheduler maps the ready threads of a job onto the allotted processors. We present two provably efficient two-level scheduling schemes called G-RAD and S-RAD. Both schemes use the same job scheduler, RAD, which ensures fair processor allotments under all levels of workload. In G-RAD, RAD is combined with a greedy thread scheduler suitable for centralized scheduling; in S-RAD, it is combined with a work-stealing thread scheduler better suited to distributed settings. Both G-RAD and S-RAD are non-clairvoyant. Moreover, they provide effective control over the scheduling overhead and ensure efficient utilization of processors. We also analyze the competitiveness of both G-RAD and S-RAD with respect to an optimal clairvoyant scheduler. In terms of makespan, both schemes achieve O(1)-competitiveness for any set of jobs with arbitrary release times. In terms of mean response time, both schemes are O(1)-competitive for arbitrary batched jobs. To the best of our knowledge, G-RAD and S-RAD are the first non-clairvoyant scheduling algorithms that guarantee provable efficiency, fairness, and minimal overhead.
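The two-level structure described in the abstract can be illustrated with a minimal sketch. The code below is a hypothetical simplification, not the paper's actual RAD algorithm: the kernel level divides processors as evenly as possible among active jobs (RAD additionally uses quantum-based round-robin and parallelism feedback), and the user level greedily runs as many ready threads as the allotment permits. Job names, unit-work threads, and the processor count are illustrative assumptions.

```python
from collections import deque

def equi_allot(num_procs, jobs):
    """Kernel level: divide processors as evenly as possible among
    active jobs (a simplification of RAD's fair allotment policy)."""
    active = [j for j in jobs if j["remaining"]]
    if not active:
        return {}
    base, extra = divmod(num_procs, len(active))
    # The first `extra` jobs receive one leftover processor each.
    return {job["name"]: base + (1 if i < extra else 0)
            for i, job in enumerate(active)}

def greedy_step(job, allotted):
    """User level: greedy thread scheduling -- execute
    min(allotment, ready threads) unit-work threads this quantum."""
    executed = min(allotted, len(job["remaining"]))
    for _ in range(executed):
        job["remaining"].popleft()
    return executed

# Hypothetical workload: each job is a queue of unit-work threads.
jobs = [
    {"name": "A", "remaining": deque(range(6))},
    {"name": "B", "remaining": deque(range(3))},
]

quanta = 0
while any(j["remaining"] for j in jobs):
    allot = equi_allot(4, jobs)                # fair allotment per quantum
    for j in jobs:
        if j["remaining"]:
            greedy_step(j, allot[j["name"]])   # greedy mapping of threads
    quanta += 1
```

With 4 processors, job A (6 threads) and job B (3 threads) each receive 2 processors while both are active; once B finishes, A absorbs the full allotment, so the whole workload drains in 3 quanta. This illustrates why fair, adaptive allotment wastes no processors on jobs with insufficient parallelism.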

INDEX TERMS

Scheduling and task partitioning, Multiple Data Stream Architectures (Multiprocessors), General

CITATION

Yuxiong He, Wen-Jing Hsu, Charles E. Leiserson, "Provably Efficient Online Nonclairvoyant Adaptive Scheduling",

*IEEE Transactions on Parallel & Distributed Systems*, vol. 19, no. 9, pp. 1263-1279, September 2008, doi:10.1109/TPDS.2008.39
