Space Efficient Execution of Deterministic Parallel Programs
November/December 1999 (vol. 25, no. 6)
pp. 870-882

Abstract—We model a deterministic parallel program by a directed acyclic graph of tasks, where a task can execute as soon as all tasks preceding it have been executed. Each task can allocate or release an arbitrary amount of memory (i.e., heap memory allocation can be modeled). We call a parallel schedule "space efficient" if the amount of memory required is at most the number of processors times the amount of memory required for some depth-first execution of the program by a single processor. We describe a simple, locally depth-first scheduling algorithm and show that it is always space efficient. Since the scheduling algorithm is greedy, it is within a factor of two of optimal with respect to time. For the special case of a program having a series-parallel structure, we show how to efficiently compute the worst-case memory requirement over all possible depth-first executions of the program. Finally, we show how scheduling can be decentralized, making the approach scalable to a large number of processors when there is sufficient parallelism.
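As an illustration of the space bound stated in the abstract, the following minimal Python sketch (illustrative only, not the authors' algorithm; the task graph, the memory deltas, and the stack-based execution order are assumptions made up for this example) executes a small task DAG in one depth-first order, records the peak memory S1 of that sequential execution, and reports the bound p * S1 for a p-processor schedule.

from collections import defaultdict

# Hypothetical task DAG: edges map a task to the tasks that depend on it.
# Each task carries a memory delta (positive = allocation, negative = release).
tasks = {
    "root":       +1,   # allocate a result buffer
    "left":       +4,   # allocate temporary workspace for the left branch
    "left_done":  -4,   # release that workspace
    "right":      +2,   # allocate workspace for the right branch
    "right_done": -2,   # release it
    "join":       -1,   # release the result buffer
}
edges = {
    "root":       ["left", "right"],
    "left":       ["left_done"],
    "right":      ["right_done"],
    "left_done":  ["join"],
    "right_done": ["join"],
}

# Count predecessors of each task so we know when a task becomes ready.
indegree = defaultdict(int)
for u, vs in edges.items():
    for v in vs:
        indegree[v] += 1

def depth_first_peak(start="root"):
    """Peak memory of one depth-first sequential execution
    (most recently enabled task is executed first)."""
    remaining = dict(indegree)
    stack, current, peak = [start], 0, 0
    while stack:
        task = stack.pop()            # LIFO order gives a depth-first execution
        current += tasks[task]
        peak = max(peak, current)
        for succ in edges.get(task, []):
            remaining[succ] -= 1
            if remaining[succ] == 0:  # all predecessors have executed
                stack.append(succ)
    return peak

if __name__ == "__main__":
    s1 = depth_first_peak()
    p = 4
    print(f"S1 (this depth-first execution) = {s1}")
    print(f"space bound for p = {p} processors: p * S1 = {p * s1}")

For this example graph the sketch prints S1 = 5, so the corresponding bound for p = 4 processors is 20 units of memory.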

Index Terms:
Memory management, scheduling, worst case performance, parallel programming, memory bounds, shared memory.
Citation:
David J. Simpson, F. Warren Burton, "Space Efficient Execution of Deterministic Parallel Programs," IEEE Transactions on Software Engineering, vol. 25, no. 6, pp. 870-882, Nov.-Dec. 1999, doi:10.1109/32.824415