Allocating Independent Subtasks on Parallel Processors
October 1985 (vol. 11 no. 10)
pp. 1001-1016
C.P. Kruskal, Department of Computer Science, University of Illinois
A. Weiss
When using MIMD (multiple instruction, multiple data) parallel computers, one is often confronted with a task composed of many independent subtasks, where the processors must be synchronized after all the subtasks have been completed. This paper studies how the subtasks should be allocated to the processors in order to minimize the expected time to complete all the subtasks (sometimes called the makespan). We assume that the running times of the subtasks are independent, identically distributed, increasing failure rate random variables, and that assigning one or more subtasks to a processor entails some overhead, or communication time, that is independent of the number of subtasks allocated. Our analyses, which use ideas from renewal theory, reliability theory, order statistics, and the theory of large deviations, are valid for a wide class of distributions. We show that allocating an equal number of subtasks to each processor all at once achieves good efficiency. This result follows from a rather general theorem showing that certain consequences of the central limit theorem hold even when the central limit theorem itself cannot be shown to apply.
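
To make the allocation model concrete, the following is a minimal simulation sketch, not taken from the paper: it assumes i.i.d. subtask times drawn from an illustrative uniform distribution (one example of an increasing failure rate distribution), a fixed per-processor overhead, and the equal-allocation strategy described above, and it estimates the expected makespan by Monte Carlo. The function name, distribution, and parameter values are assumptions chosen for illustration only.

    # Sketch of the static "equal allocation" strategy from the abstract:
    # n subtasks are split evenly among p processors, each assignment incurs
    # a fixed overhead c, and the makespan is the slowest processor's finish
    # time.  Uniform(0.5, 1.5) service times are an illustrative assumption.
    import random

    def expected_makespan(n_subtasks, p_processors, overhead, trials=10_000, seed=0):
        rng = random.Random(seed)
        per_proc = n_subtasks // p_processors  # assume p divides n for simplicity
        total = 0.0
        for _ in range(trials):
            finish_times = [
                overhead + sum(rng.uniform(0.5, 1.5) for _ in range(per_proc))
                for _ in range(p_processors)
            ]
            total += max(finish_times)  # makespan = last processor to finish
        return total / trials

    if __name__ == "__main__":
        # e.g., 1024 subtasks on 16 processors with overhead 2.0 per processor
        print(expected_makespan(n_subtasks=1024, p_processors=16, overhead=2.0))
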
Index Terms:
scheduling, parallel processing, performance analysis, queueing analysis
Citation:
C.P. Kruskal, A. Weiss, "Allocating Independent Subtasks on Parallel Processors," IEEE Transactions on Software Engineering, vol. 11, no. 10, pp. 1001-1016, Oct. 1985, doi:10.1109/TSE.1985.231547