On Distributed Computations with Limited Resources
May 1987 (vol. 36 no. 5)
pp. 517-528
H.R. Kanakia, Computer Systems Laboratory, Department of Electrical Engineering, Stanford University
F.A. Tobagi, Computer Systems Laboratory, Department of Electrical Engineering, Stanford University
We consider two styles of executing a single job or an algorithm: either the job is subdivided into tasks, each of which is executed on a separate processor, or the entire job is executed on a single processor that has the same capacity as the sum of the processors in the earlier case. The algorithm is abstracted as consisting of a number of tasks with dependencies among them. Our model of dependencies among tasks allows sequential execution, parallel execution, synchronization, and spawning of tasks. The model assumes that the dependencies are known before the job begins and that a task is not preempted after its execution begins. With the usual assumptions, such as exponential distribution of task execution times and Poisson arrival of input data, we show that centralized execution completes the job faster than decentralized execution only for a certain range of algorithm parameters. We also give counterexamples showing that, contrary to popular belief, the reverse is true for some values of algorithm parameters.
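To make the comparison in the abstract concrete, the following is a minimal Monte Carlo sketch, not the paper's analytical model: it ignores Poisson arrivals and queueing, assumes a small hypothetical task graph, and uses illustrative names (sample_task_times, centralized_time, decentralized_time). Centralized execution runs all tasks back to back on one processor of capacity k; decentralized execution runs each task at unit speed, starting when its predecessors finish.

```python
import random

def sample_task_times(n_tasks, mean=1.0):
    """Draw exponentially distributed work requirements, one per task."""
    return [random.expovariate(1.0 / mean) for _ in range(n_tasks)]

def centralized_time(work, k):
    """One processor of capacity k executes tasks sequentially without
    preemption, so completion time is total work divided by k."""
    return sum(work) / k

def decentralized_time(work, deps):
    """One unit-speed processor per task; a task starts when all of its
    predecessors finish, so completion time is the longest dependency path."""
    finish = {}
    for t, w in enumerate(work):  # tasks assumed topologically ordered
        start = max((finish[p] for p in deps.get(t, [])), default=0.0)
        finish[t] = start + w
    return max(finish.values())

# Hypothetical example: task 0 spawns tasks 1 and 2 (parallel execution),
# and task 3 synchronizes on both before it can run.
deps = {1: [0], 2: [0], 3: [1, 2]}
k = 4
trials = 10000
cent = dist = 0.0
for _ in range(trials):
    w = sample_task_times(4)
    cent += centralized_time(w, k)
    dist += decentralized_time(w, deps)
print("mean centralized completion:  ", cent / trials)
print("mean decentralized completion:", dist / trials)
```

Varying k, the mean task time, and the shape of the dependency graph shows the kind of parameter-dependent crossover the abstract refers to: neither execution style dominates for all values.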
Index Terms:
theory of distributed algorithms, comparison of distributed versus centralized execution of algorithms, distributed algorithms, performance analysis
Citation:
H.R. Kanakia, F.A. Tobagi, "On Distributed Computations with Limited Resources," IEEE Transactions on Computers, vol. 36, no. 5, pp. 517-528, May 1987, doi:10.1109/TC.1987.1676936