
Wei Shu, Min-You Wu, "Runtime Incremental Parallel Scheduling (RIPS) on Distributed Memory Computers," IEEE Trans. Parallel and Distributed Systems, vol. 7, no. 6, pp. 637-649, June 1996, doi: 10.1109/71.506702.
Index Terms—Runtime load balancing, incremental scheduling, parallel scheduling, irregular and dynamic applications, distributed memory computers.
Abstract—Runtime Incremental Parallel Scheduling (RIPS) is an alternative strategy to the commonly used dynamic scheduling. In this strategy, system scheduling activity alternates with the underlying computation work. RIPS uses advanced parallel scheduling techniques to produce low-overhead, high-quality load balancing, and to adapt to irregular applications. This paper presents methods for scheduling a single job on a dedicated parallel machine.
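The alternation the abstract describes can be illustrated with a toy simulation: processors run a global scheduling phase that rebalances all pending tasks, then a computation phase in which each processor executes a bounded amount of work, possibly spawning new tasks (the irregular case). This is a minimal sketch under assumed parameters (`quantum`, spawn probability), not the paper's actual RIPS algorithm, whose scheduling phase is itself parallel and far more sophisticated.

```python
import random

def balance(queues):
    """Scheduling phase: redistribute all pending tasks evenly
    across processors (a simple stand-in for RIPS's parallel
    scheduling step)."""
    tasks = [t for q in queues for t in q]
    n = len(queues)
    return [tasks[i::n] for i in range(n)]

def compute(queues, quantum, rng):
    """Computation phase: each processor runs up to `quantum`
    tasks; an irregular task may spawn a child task."""
    done = 0
    for q in queues:
        for _ in range(min(quantum, len(q))):
            q.pop()
            done += 1
            if rng.random() < 0.4:  # hypothetical spawn probability
                q.append("child")
    return done

def rips(num_procs=4, initial=20, quantum=3, seed=0):
    """Alternate scheduling and computation until all work is done."""
    rng = random.Random(seed)
    queues = [["task"] * initial] + [[] for _ in range(num_procs - 1)]
    completed = 0
    while any(queues):
        queues = balance(queues)                     # scheduling phase...
        completed += compute(queues, quantum, rng)   # ...then computation
    return completed

print(rips())
```

The centralized `balance` here trades scalability for clarity; the point is only the phase structure, in which load balancing happens at well-defined synchronization points rather than concurrently with computation, as in conventional dynamic scheduling.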
[1] J. Salmon, "Parallel Hierarchical N-Body Methods," Technical Report CRPC-90-14, Center for Research in Parallel Computing, Caltech, 1990.
[2] K.M. Dragon and J.L. Gustafson, "A Low-Cost Hypercube Load Balance Algorithm," Proc. Fourth Conf. Hypercube Concurrent Computers and Applications, pp. 583-590, 1989.
[3] G. Cybenko, "Dynamic Load Balancing for Distributed Memory Multiprocessors," J. Parallel and Distributed Computing, vol. 7, pp. 279-301, 1989.
[4] I. Ahmad and Y.K. Kwok, "A Parallel Approach for Multiprocessor Scheduling," Proc. Int'l Parallel Processing Symp., pp. 289-293, Apr. 1995.
[5] M.Y. Wu, "Parallel Incremental Scheduling," Parallel Processing Letters, 1995.
[6] M.Y. Wu and D.D. Gajski, "Hypertool: A Programming Aid for Message-Passing Systems," IEEE Trans. Parallel and Distributed Systems, vol. 1, no. 3, pp. 330-343, July 1990.
[7] H. El-Rewini and T.G. Lewis, "Scheduling Parallel Program Tasks onto Arbitrary Target Machines," J. Parallel and Distributed Computing, vol. 9, pp. 138-153, 1990.
[8] T. Yang and A. Gerasoulis, "PYRROS: Static Scheduling and Code Generation for Message Passing Multiprocessors," Proc. Sixth ACM Int'l Conf. Supercomputing, pp. 428-437, 1992.
[9] Y.C. Chung and S. Ranka, "Applications and Performance Analysis of a Compile-Time Optimization Approach for List Scheduling Algorithms on Distributed Memory Multiprocessors," Proc. Supercomputing '92, pp. 512-521, 1992.
[10] I. Ahmad, Y. Kwok, and M. Wu, "Performance Comparison of Algorithms for Static Scheduling of DAGs to Multiprocessors," Proc. Second Australasian Conf. Parallel and Real-Time Systems, Sept. 1995.
[11] N.G. Shivaratri, P. Krueger, and M. Singhal, "Load Distributing for Locally Distributed Systems," Computer, vol. 25, no. 12, pp. 33-44, Dec. 1992.
[12] D.L. Eager, E.D. Lazowska, and J. Zahorjan, "Adaptive Load Sharing in Homogeneous Distributed Systems," IEEE Trans. Software Eng., vol. 12, no. 5, pp. 662-675, May 1986.
[13] D.L. Eager, E.D. Lazowska, and J. Zahorjan, "A Comparison of Receiver-Initiated and Sender-Initiated Adaptive Load Sharing," Performance Evaluation, vol. 6, pp. 53-68, Mar. 1986.
[14] J.A. Stankovic, "Simulations of Three Adaptive, Decentralized Controlled, Job Scheduling Algorithms," Computer Networks, vol. 8, pp. 199-217, 1984.
[15] T.L. Casavant and J.G. Kuhl, "Analysis of Three Dynamic Distributed Load-Balancing Strategies with Varying Global Information Requirements," Proc. Int'l Conf. Distributed Computing Systems, pp. 185-192, May 1987.
[16] Y.T. Wang and R.J.T. Morris, "Load Sharing in Distributed Systems," IEEE Trans. Computers, vol. 34, no. 3, pp. 204-217, Mar. 1985.
[17] Z. Lin, "A Distributed Fair Polling Scheme Applied to Parallel Logic Programming," Int'l J. Parallel Programming, vol. 20, Aug. 1991.
[18] W. Shu, "Adaptive Dynamic Process Scheduling on Distributed Memory Parallel Computers," Scientific Programming, vol. 3, pp. 341-352, 1994.
[19] M.Y. Wu, "Symmetrical Hopping: A Scalable Scheduling Algorithm on Distributed Memory Machines," Concurrency: Practice and Experience, 1995.
[20] M. Willebeek-LeMair and A. Reeves, "Strategies for Dynamic Load Balancing on Highly Parallel Computers," IEEE Trans. Parallel and Distributed Systems, vol. 4, no. 9, pp. 979-993, Sept. 1993.
[21] F.C.H. Lin and R.M. Keller, "The Gradient Model Load Balancing Method," IEEE Trans. Software Eng., vol. 13, no. 1, pp. 32-38, Jan. 1987.
[22] W.C. Athas, "Fine Grain Concurrent Computations," PhD thesis, Dept. of Computer Science, California Inst. of Technology, May 1987.
[23] R.M. Karp and Y. Zhang, "A Randomized Parallel Branch-and-Bound Procedure," J. ACM, vol. 40, pp. 765-789, 1993.
[24] S. Chakrabarti, A. Ranade, and K. Yelick, "Randomized Load Balancing for Tree-Structured Computation," Proc. Scalable High Performance Computing Conf., pp. 666-673, 1994.
[25] G. Fox, M. Johnson, G. Lyzenga, S. Otto, J. Salmon, and D. Walker, Solving Problems on Concurrent Processors, Vol. I: General Techniques and Regular Problems. Englewood Cliffs, N.J.: Prentice Hall, 1988.
[26] M.J. Berger and S.H. Bokhari, "A Partitioning Strategy for Nonuniform Problems on Multiprocessors," IEEE Trans. Computers, vol. 36, no. 5, pp. 570-580, May 1987.
[27] S.B. Baden, "Dynamic Load Balancing of a Vortex Calculation Running on Multiprocessors," Technical Report 22584, Lawrence Berkeley Lab., 1986.
[28] J. Saltz, R. Mirchandaney, R. Smith, D. Nicol, and K. Crowley, "The PARTY Parallel Run-Time System," Proc. SIAM Conf. Parallel Processing for Scientific Computing, 1987.
[29] G.C. Sih and E.A. Lee, "A Compile-Time Scheduling Heuristic for Interconnection-Constrained Heterogeneous Processor Architectures," IEEE Trans. Parallel and Distributed Systems, vol. 4, no. 2, pp. 175-186, Feb. 1993.
[30] H. Shen, "Self-Adjusting Mapping: A Heuristic Mapping Algorithm for Mapping Parallel Programs onto Transputer Architectures," The Computer J., vol. 35, pp. 71-80, Feb. 1992.
[31] A. Kavianpour, "Systematic Approach for Mapping Application Tasks in Hypercubes," IEEE Trans. Computers, vol. 42, no. 6, pp. 742-746, June 1993.
[32] C. Yu and C.R. Das, "Disjoint Task Allocation Algorithm for MIN Machines with Minimal Conflicts," IEEE Trans. Parallel and Distributed Systems, vol. 6, no. 4, pp. 373-387, Apr. 1995.
[33] R. Koeninger, M. Furtney, and M. Walker, "A Shared Memory MPP from Cray Research," Digital Technical J., vol. 6, no. 2, pp. 8-21, 1994.
[34] M.Y. Wu, "On Runtime Parallel Scheduling," Technical Report 95-34, Dept. of Computer Science, State Univ. of New York at Buffalo, Apr. 1995.
[35] R.E. Korf, "Depth-First Iterative-Deepening: An Optimal Admissible Tree Search," Artificial Intelligence, vol. 27, no. 1, pp. 97-109, 1985.
[36] V.N. Rao and V. Kumar, "Parallel Depth-First Search, Part I: Implementation," Int'l J. Parallel Programming, vol. 16, no. 6, pp. 479-499, 1987.
[37] W.F. van Gunsteren and H.J.C. Berendsen, "GROMOS: GROningen MOlecular Simulation Software," technical report, Laboratory of Physical Chemistry, Univ. of Groningen, Nijenborgh, The Netherlands, 1988.
[38] R. von Hanxleden and K. Kennedy, "Relaxing SIMD Control Flow Constraints Using Loop Transformations," Technical Report CRPC-TR92207, Center for Research on Parallel Computation, Rice Univ., Apr. 1992.
[39] J. Shen and J.A. McCammon, "Molecular Dynamics Simulation of Superoxide Interacting with Superoxide Dismutase," Chemical Physics, vol. 158, pp. 191-198, 1991.
[40] M. Willebeek-LeMair, personal communication, 1995.