Issue No. 9, September 2008 (vol. 19), pp. 1263-1279
Yuxiong He, Singapore-MIT Alliance, Nanyang Technological University, Singapore
Wen-Jing Hsu, Nanyang Technological University, Singapore
Charles E. Leiserson, MIT, Cambridge
Multiprocessor scheduling in a shared multiprogramming environment can be structured in two levels, where a kernel-level job scheduler allots processors to jobs and a user-level thread scheduler maps the ready threads of a job onto the allotted processors. We present two provably efficient two-level scheduling schemes called G-RAD and S-RAD. Both schemes use the same job scheduler, RAD, for processor allotment, which ensures fair allocation under all levels of workload. In G-RAD, RAD is combined with a greedy thread scheduler suitable for centralized scheduling; in S-RAD, RAD is combined with a work-stealing thread scheduler better suited to distributed settings. Both G-RAD and S-RAD are nonclairvoyant. Moreover, they provide effective control over scheduling overhead and ensure efficient utilization of processors. We also analyze the competitiveness of both G-RAD and S-RAD with respect to an optimal clairvoyant scheduler. In terms of makespan, both schemes achieve O(1)-competitiveness for any set of jobs with arbitrary release times. In terms of mean response time, both schemes are O(1)-competitive for arbitrary batched jobs. To the best of our knowledge, G-RAD and S-RAD are the first nonclairvoyant scheduling algorithms that guarantee provable efficiency, fairness, and minimal overhead.
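To make the work-stealing mechanism mentioned in the abstract concrete, here is a minimal sketch of the classic randomized work-stealing discipline: each worker keeps its own double-ended queue of tasks, pops from its own bottom, and when idle steals from the top of another worker's deque. This is an illustrative toy, not the paper's S-RAD: it is a single-threaded simulation with no RAD job scheduler, no parallelism feedback, and no quantum structure; the function name and task representation are assumptions for the example.

```python
import random
from collections import deque

def work_steal_run(task_lists, seed=0):
    """Single-threaded simulation of randomized work stealing.

    task_lists[i] is worker i's initial task deque. Each round, every
    worker pops a task from the bottom of its own deque; an idle worker
    steals one task from the top of a random nonempty victim's deque.
    Returns the list of tasks each worker ended up executing.
    """
    rng = random.Random(seed)
    deques = [deque(ts) for ts in task_lists]
    executed = [[] for _ in task_lists]
    n = len(deques)
    while any(deques):
        for w in range(n):
            if deques[w]:
                task = deques[w].pop()          # pop own bottom (LIFO)
            else:
                victims = [v for v in range(n) if v != w and deques[v]]
                if not victims:
                    continue                    # nothing to steal this round
                task = deques[rng.choice(victims)].popleft()  # steal top (FIFO)
            executed[w].append(task)
    return executed

# Usage: worker 1 starts empty and obtains all its work by stealing
# from the top of worker 0's deque, while worker 0 works from the bottom.
executed = work_steal_run([["a", "b", "c", "d"], []])
```

Stealing from the top of the victim's deque (oldest tasks) while owners pop from the bottom (newest tasks) is the standard design choice: it minimizes contention between owner and thief and tends to migrate the largest remaining subcomputations.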
Scheduling and task partitioning, Multiple Data Stream Architectures (Multiprocessors), General
Yuxiong He, Wen-Jing Hsu, Charles E. Leiserson, "Provably Efficient Online Nonclairvoyant Adaptive Scheduling", IEEE Transactions on Parallel & Distributed Systems, vol.19, no. 9, pp. 1263-1279, September 2008, doi:10.1109/TPDS.2008.39