Issue No. 10, October 1985 (vol. 11)
pp. 1001-1016
C.P. Kruskal, Department of Computer Science, University of Illinois; A. Weiss
ABSTRACT
When using MIMD (multiple instruction, multiple data) parallel computers, one is often confronted with solving a task composed of many independent subtasks where it is necessary to synchronize the processors after all the subtasks have been completed. This paper studies how the subtasks should be allocated to the processors in order to minimize the expected time it takes to finish all the subtasks (sometimes called the makespan). We assume that the running times of the subtasks are independent, identically distributed, increasing failure rate random variables, and that assigning one or more subtasks to a processor entails some overhead, or communication time, that is independent of the number of subtasks allocated. Our analyses, which use ideas from renewal theory, reliability theory, order statistics, and the theory of large deviations, are valid for a wide class of distributions. We show that allocating an equal number of subtasks to each processor all at once has good efficiency. This appears as a consequence of a rather general theorem which shows how some consequences of the central limit theorem hold even when we cannot prove that the central limit theorem applies.
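The static scheme the abstract analyzes (assign each processor an equal share of subtasks once, pay a fixed per-processor overhead, then wait for the slowest processor) can be sketched as a small simulation. This is an illustrative sketch, not the paper's analysis: the function names are hypothetical, and exponential running times are used only as a convenient boundary case of the increasing-failure-rate assumption.

```python
import random

def makespan_static(n_subtasks, p, overhead, draw_time, seed=None):
    """Simulated makespan when n_subtasks are split evenly across p
    processors, each paying a fixed allocation/communication overhead.

    draw_time: callable taking an RNG and returning one i.i.d.
    subtask running time.
    """
    rng = random.Random(seed)
    per_proc = n_subtasks // p  # assume p divides n for simplicity
    # Each processor finishes after its overhead plus its subtasks' times.
    finish_times = [
        overhead + sum(draw_time(rng) for _ in range(per_proc))
        for _ in range(p)
    ]
    # Synchronization: all subtasks are done when the slowest processor is.
    return max(finish_times)

# Example: 1000 subtasks, 10 processors, mean-1 exponential subtask times.
t = makespan_static(1000, 10, overhead=0.5,
                    draw_time=lambda rng: rng.expovariate(1.0), seed=42)
```

The expected makespan here is roughly the per-processor work (about 100) plus overhead plus a fluctuation term from taking the maximum over processors; the paper's contribution is bounding that fluctuation for a wide class of distributions.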
INDEX TERMS
scheduling, parallel processing, performance analysis, queueing analysis
CITATION
C.P. Kruskal, A. Weiss, "Allocating Independent Subtasks on Parallel Processors", IEEE Transactions on Software Engineering, vol. 11, no. 10, pp. 1001-1016, October 1985, doi:10.1109/TSE.1985.231547
