Isoefficiency Maps for Divisible Computations
June 2010 (vol. 21 no. 6)
pp. 872-880
Maciej Drozdowski, Poznan University of Technology, Poznan
Lukasz Wielebski, Poznan University of Technology, Poznan
In this paper, we propose a new technique for presenting performance relationships in parallel processing. The performance of parallel processing is a complex subject with many counterintuitive phenomena. It is relatively easy to obtain numerical indicators of performance using various performance models; it is far more difficult to comprehend the nature of the analyzed problem. To facilitate understanding of the performance relationships, we propose a new visualization technique based on the concept of isoefficiency. In this paper, isoefficiency is represented as a relation on points in the space of system parameters for which the efficiency of parallel processing is equal. We visualize this relation on two-dimensional maps, analogously to isobars and isotherms on weather maps. This concept is applied to depict the performance relationships in two standard performance laws: Amdahl's speedup law and Gustafson's speedup law. Then, we use isoefficiency maps to analyze the performance relationships in divisible load processing. The divisible load model corresponds to data-parallel computations in an environment with communication delays. The results give interesting insights into the relationships existing in parallel processing.
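The isoefficiency idea described above can be sketched for the two speedup laws the abstract names. This is a hypothetical illustration, not the authors' code: under Amdahl's law, efficiency is E(m, f) = 1/(m·f + 1 − f) for m processors and serial fraction f; under Gustafson's law (scaled speedup), E(m, f) = f/m + 1 − f. Fixing E and solving for f as a function of m traces one isoefficiency contour in the (m, f) parameter plane, analogous to a single isobar on a weather map. The function names and the chosen contour level E = 0.5 are assumptions for illustration only.

```python
def amdahl_efficiency(m: int, f: float) -> float:
    """Efficiency of m processors under Amdahl's law:
    speedup S = 1 / (f + (1 - f)/m), efficiency E = S / m."""
    return 1.0 / (m * f + 1.0 - f)

def gustafson_efficiency(m: int, f: float) -> float:
    """Efficiency under Gustafson's law:
    scaled speedup S = f + m*(1 - f), efficiency E = S / m."""
    return f / m + 1.0 - f

def amdahl_iso_f(e: float, m: int) -> float:
    """Serial fraction f at which Amdahl efficiency equals e (needs m >= 2);
    one point on an isoefficiency line in the (m, f) plane."""
    return (1.0 / e - 1.0) / (m - 1)

def gustafson_iso_f(e: float, m: int) -> float:
    """Serial fraction f at which Gustafson efficiency equals e (needs m >= 2)."""
    return m * (1.0 - e) / (m - 1)

# Trace one isoefficiency contour per law, for efficiency 0.5 / 0.9:
amdahl_contour = [(m, amdahl_iso_f(0.5, m)) for m in range(2, 7)]
gustafson_contour = [(m, gustafson_iso_f(0.9, m)) for m in range(2, 7)]
```

Plotting several such contours for different efficiency levels yields the kind of two-dimensional isoefficiency map the paper proposes.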

[1] R. Agrawal and H.V. Jagadish, "Partitioning Techniques for Large-Grained Parallelism," IEEE Trans. Computers, vol. 37, no. 12, pp. 1627-1634, Dec. 1988.
[2] S.G. Akl, The Design and Analysis of Parallel Algorithms. Prentice-Hall Int'l, Inc., 1989.
[3] G.M. Amdahl, "Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities," Proc. Am. Federation of Information Processing Soc. Conf. (AFIPS), vol. 30, pp. 483-485, Apr. 1967.
[4] G. Barlas, "Collection-Aware Optimum Sequencing of Operations and Closed-Form Solutions for the Distribution of a Divisible Load on Arbitrary Processor Trees," IEEE Trans. Parallel and Distributed Systems, vol. 9, no. 5, pp. 429-441, May 1998.
[5] O. Beaumont, H. Casanova, A. Legrand, Y. Robert, and Y. Yang, "Scheduling Divisible Loads on Star and Tree Networks: Results and Open Problems," IEEE Trans. Parallel and Distributed Systems, vol. 16, no. 3, pp. 207-218, Mar. 2005.
[6] V. Bharadwaj, D. Ghose, and T. Robertazzi, "Divisible Load Theory: A New Paradigm for Load Scheduling in Distributed Systems," Cluster Computing, vol. 6, no. 1, pp. 7-17, 2003.
[7] V. Bharadwaj, D. Ghose, V. Mani, and T. Robertazzi, Scheduling Divisible Loads in Parallel and Distributed Systems. IEEE CS Press, 1996.
[8] J. Błażewicz, M. Drozdowski, and M. Markiewicz, "Divisible Task Scheduling—Concept and Verification," Parallel Computing, vol. 25, no. 1, pp. 87-98, 1999.
[9] Y.-C. Cheng and T.G. Robertazzi, "Distributed Computation with Communication Delay," IEEE Trans. Aerospace and Electronic Systems, vol. 24, no. 6, pp. 700-712, Nov. 1988.
[10] N. Comino and V.L. Narasimhan, "A Novel Data Distribution Technique for Host-Client Type Parallel Applications," IEEE Trans. Parallel and Distributed Systems, vol. 13, no. 2, pp. 97-110, Feb. 2002.
[11] J. Dongarra and W. Gentzsch, Computer Benchmarks. North-Holland, 1993.
[12] M. Drozdowski and P. Wolniewicz, "Experiments with Scheduling Divisible Tasks in Clusters of Workstations," Proc. Euro-Par '00, A. Bode, T. Ludwig, W. Karl, and R. Wismüller, eds., pp. 311-319, 2000.
[13] M. Drozdowski and L. Wielebski, "Efficiency of Divisible Load Processing," Proc. Int'l Conf. Parallel Processing and Applied Math. (PPAM '03), R. Wyrzykowski, J. Dongarra, M. Paprzycki, and J. Wasniewski, eds., pp. 175-180, 2004.
[14] D. Feitelson, "Performance Evaluation Links," http://www. , Mar. 2009.
[15] A. Grama, A. Gupta, and V. Kumar, "Isoefficiency: Measuring the Scalability of Parallel Algorithms and Architectures," IEEE Parallel and Distributed Technology, vol. 1, no. 3, pp. 12-21, Aug. 1993.
[16] J.L. Gustafson, "Reevaluating Amdahl's Law," Comm. ACM, vol. 31, no. 5, pp. 532-533, 1988.
[17] R.W. Hockney, The Science of Computer Benchmarking. SIAM, 1996.
[18] V. Kumar and V.N. Rao, "Parallel Depth First Search. Part II. Analysis," Int'l J. Parallel Programming, vol. 16, no. 6, pp. 501-519, 1987.
[19] R. Jain, The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling. John Wiley & Sons, 1991.
[20] P. Li, B. Veeravalli, and A.A. Kassim, "Design and Implementation of Parallel Video Encoding Strategies Using Divisible Load Analysis," IEEE Trans. Circuits and Systems for Video Technology, vol. 15, no. 9, pp. 1098-1112, Sept. 2005.
[21] K. van der Raadt, Y. Yang, and H. Casanova, "Practical Divisible Load Scheduling on Grid Platforms with APST-DV," Proc. 19th IEEE Int'l Parallel and Distributed Processing Symp. (IPDPS '05), p. 29b, 2005.
[22] T.G. Robertazzi, "Ten Reasons to Use Divisible Load Theory," Computer, vol. 36, no. 5, pp. 63-68, May 2003.
[23] Y. Yang, H. Casanova, M. Drozdowski, M. Lawenda, and A. Legrand, "On the Complexity of Multi-Round Divisible Load Scheduling," Research Report 6096, INRIA Rhône-Alpes, 2007.

Index Terms:
Performance evaluation, scheduling, divisible load theory, isoefficiency.
Maciej Drozdowski, Lukasz Wielebski, "Isoefficiency Maps for Divisible Computations," IEEE Transactions on Parallel and Distributed Systems, vol. 21, no. 6, pp. 872-880, June 2010, doi:10.1109/TPDS.2009.128