An Efficient Adaptive Scheduling Scheme for Distributed Memory Multicomputers
July 2001 (vol. 12 no. 7)
pp. 758-768

Abstract—Traditional multiprocessor scheduling schemes have been either space-sharing or time-sharing. Space-sharing schemes perform better than time-sharing at low to moderate system loads; however, they waste processing power within partitions at medium to high loads. Time-sharing schemes tend to perform better at medium to high system loads. Almost all the scheduling schemes proposed so far have been tested under ad hoc workload assumptions. In light of recent knowledge about workloads, it is imperative to develop an integrated scheduling scheme that combines the advantages of space- and time-sharing while overcoming their individual drawbacks. We propose such a scheduling scheme, called the Hierarchical Scheduling Policy, which is efficient as well as general enough to accommodate multiple workloads. Simulation results indicate that our scheme significantly outperforms the best space- and time-sharing mechanisms at medium to high system loads, even in the absence of knowledge about individual job characteristics.
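The abstract's central idea, dedicating processor partitions to jobs (space-sharing) while rotating jobs within each partition (time-sharing), can be illustrated with a minimal toy sketch. This is not the paper's Hierarchical Scheduling Policy; the `Partition`/`Scheduler` classes, the equal-partition split, and the shortest-queue placement rule are all illustrative assumptions.

```python
from collections import deque

class Partition:
    """A fixed set of processors, time-shared round-robin among its jobs.

    Hypothetical model: jobs are dicts with a 'remaining' work field,
    and all processors in the partition gang-execute the head job.
    """
    def __init__(self, processors):
        self.processors = processors   # space-sharing: a dedicated subset
        self.run_queue = deque()       # time-sharing: jobs rotate here

    def add_job(self, job):
        self.run_queue.append(job)

    def tick(self, quantum):
        """Run the head job for one quantum across all processors, then rotate."""
        if not self.run_queue:
            return
        job = self.run_queue.popleft()
        job["remaining"] -= quantum * self.processors
        if job["remaining"] > 0:
            self.run_queue.append(job)  # unfinished: back of the queue

class Scheduler:
    """Space-share the machine into equal partitions; time-share within each."""
    def __init__(self, total_processors, num_partitions):
        per = total_processors // num_partitions
        self.partitions = [Partition(per) for _ in range(num_partitions)]

    def submit(self, job):
        # Illustrative placement rule: shortest run queue wins.
        target = min(self.partitions, key=lambda p: len(p.run_queue))
        target.add_job(job)

    def tick(self, quantum=1):
        for p in self.partitions:
            p.tick(quantum)
```

The sketch shows why the combination helps: at light load each job effectively owns a partition (pure space-sharing), while at heavy load the per-partition queues absorb the excess jobs instead of leaving processors idle.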

[1] S.V. Anastasiadis, “Parallel Application Scheduling on Networks of Workstations,” Technical Report 342, Computer Systems Research Inst., Univ. of Toronto, Canada, 1996.
[2] S. Anastasiadis and K.C. Sevcik, “Parallel Application Scheduling on Networks of Workstations,” J. Parallel and Distributed Computing, vol. 43, pp. 109-124, 1997.
[3] R. Arapaci et al., “The Interaction of Parallel and Sequential Workloads on a Network of Workstations,” Tech. Report, UC Berkeley Computer Science Dept., 1994.
[4] S.L. Au and S.P. Dandamudi, “The Impact of Program Structure on the Performance of Scheduling Policies in Multiprocessor Systems,” J. Computers and Their Applications, vol. 3, no. 1, pp. 17-30, Apr. 1996.
[5] D. Bailey, J. Barton, T. Lasinski, and H. Simon, “The NAS Parallel Benchmarks,” Technical Report RNR-94-007, NASA Ames Research Center, 1994.
[6] S.-H. Chiang, R.K. Mansharamani, and M.K. Vernon, "Use of Application Characteristics and Limited Preemption for Run-to-Completion Parallel Processor Scheduling Policies," ACM SIGMETRICS, pp. 33-44, 1994.
[7] S. Dandamudi and P. Cheng, "A Hierarchical Task Queue Organization for Shared-Memory Multiprocessor Systems," IEEE Trans. Parallel and Distributed Systems, vol. 6, no. 1, pp. 1-16, Jan. 1995.
[8] S.P. Dandamudi and M. Lo, “A Hierarchical Load Sharing Policy for Distributed Systems,” Proc. IEEE MASCOTS, pp. 3-10, 1997.
[9] S.P. Dandamudi, “Reducing Run Queue Contention in Shared Memory Multiprocessors,” Computer, vol. 30, no. 3, pp. 82-89, Mar. 1997.
[10] S. Dandamudi and T.K. Thyagaraj, “A Hierarchical Scheduling Policy for Distributed-Memory Multicomputer Systems,” Proc. IEEE Int'l Conf. High Performance Computing, Bangalore, India, pp. 218-223, Dec. 1997.
[11] S.P. Dandamudi and H. Yu, “Performance of Adaptive Space Sharing Processor Allocation Policies for Distributed-Memory Multicomputers,” J. Parallel and Distributed Computing, vol. 58, pp. 109-125, 1999.
[12] H. Murase and S.K. Nayar, "Illumination Planning for Object Recognition in Structured Environments," Proc. IEEE Computer Soc. Conf. Computer Vision and Pattern Recognition, pp. 31-38, Seattle, Washington, June 1994.
[13] D.G. Feitelson and B. Nitzberg, “Job Characteristics of a Production Parallel Scientific Workload on the NASA Ames iPSC/860,” Job Scheduling Strategies for Parallel Processing, D.G. Feitelson and L. Rudolph, eds., pp. 337-360, Springer-Verlag, 1995.
[14] D.G. Feitelson and L. Rudolph, "Distributed Hierarchical Control for Parallel Processing," Computer, vol. 23, no. 5, pp. 65-77, May 1990.
[15] S. Hotovy, “Workload Evolution on the Cornell Theory Center IBM SP2,” Job Scheduling Strategies for Parallel Processing, D.G. Feitelson and L. Rudolph, eds., pp. 27-40, Springer-Verlag, 1996.
[16] J. Jann, P. Pattnaik, H. Franke, F. Wang, J. Skovira, and J. Riordan, “Modeling of Workload in MPPs,” Proc. Third Ann. Workshop Job Scheduling Strategies for Parallel Processing, pp. 95-116, Apr. 1997.
[17] S.T. Leutenegger and M.K. Vernon, “The Performance of Multiprogrammed Multiprocessor Scheduling Policies,” Proc. ACM SIGMETRICS Conf., Boulder, Colo., pp. 226-236, 1990.
[18] C. McCann and J. Zahorjan, "Processor Allocation Policies for Message-Passing Parallel Computers," ACM SIGMETRICS, pp. 19-32, 1994.
[19] P.K. McKinley, Y.-J. Tsai, and D. Robinson, "Collective Communication in Wormhole-routed Massively Parallel Computers," Computer, vol. 28, no. 12, pp. 39-50, Dec. 1995.
[20] J.K. Ousterhout, “Scheduling Techniques for Concurrent Systems,” Proc. Third Int'l Conf. Distributed Computing Systems, pp. 22-30, Oct. 1982.
[21] E.W. Parsons and K.C. Sevcik, “Multiprocessor Scheduling for High-Variability Service Time Distributions,” Job Scheduling Strategies for Parallel Processing, D.G. Feitelson and L. Rudolph, eds., pp. 127-145, Springer-Verlag, 1995.
[22] E. Rosti, E. Smirni, L.W. Dowdy, G. Serazzi, and B. Carlson, "Robust Partitioning Policies for Multiprocessor Systems," Performance Evaluation, vol. 19, nos. 2-3, pp. 141-165, 1994.
[23] S.K. Setia, M.S. Squillante, and S.K. Tripathi, "Processor Scheduling in Multiprogrammed, Distributed Memory Parallel Computers," ACM SIGMETRICS, pp. 158-170, 1993.
[24] K.C. Sevcik, “Application Scheduling and Processor Allocation in Multiprogrammed Parallel Processing Systems,” Performance Evaluation–An Int'l J., vol. 19, nos. 2-3, pp. 107–140, Mar. 1994.
[25] T. von Eicken et al., “Active Messages: A Mechanism for Integrated Communication and Computation,” Proc. 19th Int'l Symp. Computer Architecture, pp. 256-266, May 1992.
[26] C.-S. Wu, “Processor Scheduling in Multiprogrammed Shared Memory NUMA Multiprocessors,” Technical Report 341, Computer Systems Research Inst., Univ. of Toronto, Canada, 1993.
[27] Z. Xu and K. Hwang, "Modeling Communication Overhead: MPI and MPL Performance on the IBM SP2," IEEE Parallel & Distributed Technology, vol. 4, no. 1, pp. 9-23, Spring 1996.
[28] S. Zhou and T. Brecht, “Processor Pool-Based Scheduling for Large-Scale NUMA Multiprocessors,” Proc. ACM SIGMETRICS Conf., pp. 133-142, 1991.

Index Terms:
Multicomputer systems, job/task scheduling, space partitioning, time sharing, hierarchical scheduling, multiple workloads, performance evaluation.
Citation:
Thyagaraj Thanalapati, Sivarama Dandamudi, "An Efficient Adaptive Scheduling Scheme for Distributed Memory Multicomputers," IEEE Transactions on Parallel and Distributed Systems, vol. 12, no. 7, pp. 758-768, July 2001, doi:10.1109/71.940749