Adaptive Parallel Job Scheduling with Flexible Coscheduling
November 2005 (vol. 16, no. 11)
pp. 1066-1077

Abstract—Many scientific and high-performance computing applications consist of multiple processes running on different processors that communicate frequently. Because of their synchronization needs, these applications can suffer severe performance penalties if their processes are not all coscheduled to run together. Two common approaches to coscheduling jobs are batch scheduling, wherein nodes are dedicated for the duration of the run, and gang scheduling, wherein time slicing is coordinated across processors. Both work well when jobs are load-balanced and make use of the entire parallel machine. However, these conditions are rarely met and most realistic workloads consequently suffer from both internal and external fragmentation, in which resources and processors are left idle because jobs cannot be packed with perfect efficiency. This situation leads to reduced utilization and suboptimal performance. Flexible CoScheduling (FCS) addresses this problem by monitoring each job's computation granularity and communication pattern and scheduling jobs based on their synchronization and load-balancing requirements. In particular, jobs that do not require stringent synchronization are identified, and are not coscheduled; instead, these processes are used to reduce fragmentation. FCS has been fully implemented on top of the STORM resource manager on a 256-processor Alpha cluster and compared to batch, gang, and implicit coscheduling algorithms. This paper describes in detail the implementation of FCS and its performance evaluation with a variety of workloads, including large-scale benchmarks, scientific applications, and dynamic workloads. The experimental results show that FCS saturates at higher loads than other algorithms (up to 54 percent higher in some cases), and displays lower response times and slowdown than the other algorithms in nearly all scenarios.
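
At the heart of FCS is the classification step the abstract alludes to: each process's measured computation granularity and communication waiting time determine whether it genuinely needs coscheduling or can instead be used to fill fragmented slots. The Python sketch below illustrates one plausible form of such a classifier; the class names, statistics, and threshold constants are illustrative assumptions for exposition, not the actual implementation inside STORM.

```python
from dataclasses import dataclass
from enum import Enum

class SchedClass(Enum):
    CS = "coscheduling"  # fine-grained and synchronized: must run with its peers
    F = "frustrated"     # wants coscheduling but blocks on load imbalance
    DC = "dont-care"     # coarse-grained: can fill fragmented slots

@dataclass
class ProcStats:
    granularity: float  # mean compute time between communication events (seconds)
    wait_frac: float    # fraction of a coscheduled timeslice spent blocked on peers

# Illustrative cutoffs only; the paper derives scheduling decisions from
# runtime measurements inside STORM, not from these specific constants.
FINE_GRAIN_CUTOFF = 0.005  # below ~5 ms between messages, coscheduling matters
WAIT_CUTOFF = 0.30         # blocking >30% of a slice suggests load imbalance

def classify(stats: ProcStats) -> SchedClass:
    """Classify one process from its measured behavior (FCS-style sketch)."""
    if stats.granularity >= FINE_GRAIN_CUTOFF:
        # Rarely communicates: scheduling it independently costs little,
        # so it can backfill idle processors and reduce fragmentation.
        return SchedClass.DC
    if stats.wait_frac > WAIT_CUTOFF:
        # Fine-grained but still stalls when coscheduled: load-imbalanced.
        return SchedClass.F
    return SchedClass.CS

# Example: a process that communicates every millisecond and rarely waits
# is classified CS and kept gang-scheduled with the rest of its job.
print(classify(ProcStats(granularity=0.001, wait_frac=0.10)))  # SchedClass.CS
```

Under such a scheme, the scheduler gang-schedules processes that need coscheduling and uses the don't-care processes to backfill otherwise-idle timeslots, which is how FCS recovers the utilization lost to internal and external fragmentation.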

[1] C. Anglano, “A Comparative Evaluation of Implicit Coscheduling Strategies for Networks of Workstations,” Proc. Ninth Int'l Symp. High Performance Distributed Computing, Aug. 2000.
[2] C.D. Antonopoulos, D.S. Nikolopoulos, and T.S. Papatheodorou, “Informing Algorithms for Efficient Scheduling of Synchronizing Threads on Multiprogrammed SMPs,” Proc. Int'l Conf. Parallel Processing, pp. 123-130, Sept. 2001.
[3] A.C. Arpaci-Dusseau, “Implicit Coscheduling: Coordinated Scheduling with Implicit Information in Distributed Systems,” ACM Trans. Computer Systems, vol. 19, no. 3, pp. 283-331, Aug. 2001.
[4] G.S. Choi, J.-H. Kim, D. Ersoz, A.B. Yoo, and C.R. Das, “Coscheduling in Clusters: Is It a Viable Alternative?” Proc. Supercomputing Conf. 2004, Nov. 2004.
[5] D.E. Culler and J.P. Singh, Parallel Computer Architecture: A Hardware/Software Approach. Morgan Kaufmann Publishers, Inc., 1999.
[6] D.G. Feitelson and L. Rudolph, “Metrics and Benchmarking for Parallel Job Scheduling,” Job Scheduling Strategies for Parallel Processing, D.G. Feitelson and L. Rudolph, eds., pp. 1-24, 1998.
[7] E. Frachtenberg, D.G. Feitelson, J. Fernandez-Peinador, and F. Petrini, “Parallel Job Scheduling under Dynamic Workloads,” Proc. Ninth Workshop Job Scheduling Strategies for Parallel Processing, June 2003.
[8] E. Frachtenberg, D.G. Feitelson, F. Petrini, and J. Fernandez, “Flexible CoScheduling: Dealing with Load Imbalance and Heterogeneous Resources,” Proc. Int'l Parallel and Distributed Processing Symp., Apr. 2003.
[9] E. Frachtenberg, F. Petrini, J. Fernandez, S. Pakin, and S. Coll, “STORM: Lightning-Fast Resource Management,” Proc. Supercomputing Conf. 2002, Nov. 2002.
[10] A. Hoisie, O. Lubeck, and H. Wasserman, “Scalability Analysis of Multidimensional Wavefront Algorithms on Large-Scale SMP Clusters,” Proc. Symp. Frontiers of Massively Parallel Computation, Feb. 1999.
[11] D. Kerbyson, H. Alme, A. Hoisie, F. Petrini, H. Wasserman, and M. Gittings, “Predictive Performance and Scalability Modeling of a Large-Scale Application,” Proc. Supercomputing Conf. 2001, Nov. 2001.
[12] R. Kettimuthu, V. Subramani, S. Srinivasan, T.B. Gopalsamy, D.K. Panda, and P. Sadayappan, “Selective Preemption Strategies for Parallel Job Scheduling,” Proc. Int'l Conf. Parallel Processing, Aug. 2002.
[13] J. Kim and D.J. Lilja, “Characterization of Communication Patterns in Message-Passing Parallel Scientific Application Programs,” Proc. Workshop Comm., Architecture, and Applications for Network-Based Parallel Computing, pp. 202-216, Feb. 1998.
[14] W. Lee, M. Frank, V. Lee, K. Mackenzie, and L. Rudolph, “Implications of I/O for Gang Scheduled Workloads,” Job Scheduling Strategies for Parallel Processing, D.G. Feitelson and L. Rudolph, eds., pp. 215-237, 1997.
[15] U. Lublin and D.G. Feitelson, “The Workload on Parallel Supercomputers: Modeling the Characteristics of Rigid Jobs,” J. Parallel and Distributed Computing, vol. 63, no. 11, pp. 1105-1122, Nov. 2003.
[16] A.W. Mu'alem and D.G. Feitelson, “Utilization, Predictability, Workloads, and User Runtime Estimates in Scheduling the IBM SP2 with Backfilling,” IEEE Trans. Parallel and Distributed Systems, vol. 12, no. 6, pp. 529-543, June 2001.
[17] S. Nagar, A. Banerjee, A. Sivasubramaniam, and C.R. Das, “A Closer Look at Coscheduling Approaches for a Network of Workstations,” Proc. ACM Symp. Parallel Algorithms and Architectures, pp. 96-105, June 1999.
[18] D.S. Nikolopoulos and C.D. Polychronopoulos, “Adaptive Scheduling under Memory Constraints on Non-Dedicated Computational Farms,” Future Generation Computer Systems, vol. 19, no. 4, pp. 505-519, May 2003.
[19] F. Petrini, W.C. Feng, A. Hoisie, S. Coll, and E. Frachtenberg, “The Quadrics Network: High Performance Clustering Technology,” IEEE Micro, vol. 22, no. 1, pp. 46-57, Jan.-Feb. 2002.
[20] F. Petrini, D. Kerbyson, and S. Pakin, “The Case of the Missing Supercomputer Performance: Achieving Optimal Performance on the 8,192 Processors of ASCI Q,” Proc. Supercomputing Conf., Nov. 2003.
[21] P. Sobalvarro, S. Pakin, W.E. Weihl, and A.A. Chien, “Dynamic Coscheduling on Workstation Clusters,” Job Scheduling Strategies for Parallel Processing, D.G. Feitelson and L. Rudolph, eds., pp. 231-256, 1998.
[22] S. Srinivasan, R. Kettimuthu, V. Subramani, and P. Sadayappan, “Selective Reservation Strategies for Backfill Job Scheduling,” Job Scheduling Strategies for Parallel Processing, D.G. Feitelson, L. Rudolph, and U. Schwiegelshohn, eds., pp. 55-71, 2002.
[23] L.G. Valiant, “A Bridging Model for Parallel Computation,” Comm. ACM, vol. 33, no. 8, pp. 103-111, Aug. 1990.
[24] Y. Wiseman and D.G. Feitelson, “Paired Gang Scheduling,” IEEE Trans. Parallel and Distributed Systems, vol. 14, no. 6, pp. 581-592, June 2003.
[25] “ASCI Technology Prospectus: Simulation and Computational Science,” Technical Report DOE/DP/ASC-ATP-001, Nat'l Nuclear Security Administration, July 2001.

Index Terms:
Cluster computing, load balancing, job scheduling, gang scheduling, parallel architectures, flexible coscheduling.
Citation:
Eitan Frachtenberg, Dror G. Feitelson, Fabrizio Petrini, Juan Fernández, "Adaptive Parallel Job Scheduling with Flexible Coscheduling," IEEE Transactions on Parallel and Distributed Systems, vol. 16, no. 11, pp. 1066-1077, Nov. 2005, doi:10.1109/TPDS.2005.130