Evaluating a High-Level Parallel Language (GpH) for Computational GRIDs
February 2008 (vol. 19 no. 2)
pp. 219-233
Computational Grids potentially offer low-cost, readily available, and large-scale high-performance platforms. For the parallel execution of programs, however, computational GRIDs pose serious challenges: they are heterogeneous, and have hierarchical and often shared interconnects, with high and variable latencies between clusters. This paper investigates whether a programming language with high-level parallel coordination and a Distributed Shared Memory (DSM) model can deliver good, scalable performance on a range of computational GRID configurations. The high-level language, Glasgow parallel Haskell (GpH), abstracts over the architectural complexities of the computational GRID, and we have developed GRID-GUM2, a sophisticated grid-specific implementation of GpH, to produce the first high-level DSM parallel language implementation for computational GRIDs. We report a systematic performance evaluation of GRID-GUM2 on combinations of high-/low-latency and homo-/heterogeneous computational GRIDs. We measure the performance of a small set of kernel parallel programs representing a variety of application areas, two parallel paradigms, and ranges of communication degree and parallel irregularity. We investigate GRID-GUM2's performance scalability on medium-scale heterogeneous and high-latency computational GRIDs, and analyse the performance with respect to the program characteristics of
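The "high-level parallel coordination" the abstract refers to is GpH's evaluation-strategy style, cited below as "Algorithm + Strategy = Parallelism" [6]: the algorithm is written as ordinary Haskell and the parallelism is attached separately via a strategy. A minimal sketch, using the modern `Control.Parallel.Strategies` API from the `parallel` package rather than the GRID-GUM2 runtime itself; the `sumEuler` kernel is an assumption here (it is a benchmark commonly used in the GpH literature, but the abstract does not name its test programs):

```haskell
-- A sketch of GpH-style coordination with evaluation strategies.
-- The algorithm (summing Euler totients) is plain Haskell; the
-- parallelism is added non-invasively with `using`.
import Control.Parallel.Strategies (parList, rdeepseq, using)

-- Euler's totient: how many k in [1..n] are coprime to n.
euler :: Int -> Int
euler n = length [k | k <- [1 .. n], gcd n k == 1]

-- `parList rdeepseq` sparks each list element for parallel
-- evaluation to normal form; the sum itself is unchanged.
sumEuler :: Int -> Int
sumEuler n = sum ([euler k | k <- [1 .. n]] `using` parList rdeepseq)

main :: IO ()
main = print (sumEuler 10)  -- prints 32
```

Under GRID-GUM2, the same source program runs unchanged: the runtime system, not the programmer, handles work distribution across the heterogeneous, high-latency clusters described above.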

[1] I. Foster and C. Kesselman, “Computational Grids,” The Grid: Blueprint for a New Computing Infrastructure, 1998.
[2] J. Basney and M. Livny, “Deploying a High Throughput Computing Cluster,” High Performance Cluster Computing, vol. 1, Prentice Hall, 1999.
[3] S. Zhou, X. Zheng, J. Wang, and P. Delisle, “Utopia: A Load Sharing Facility for Large, Heterogeneous Distributed Computer Systems,” Software—Practice and Experience, vol. 23, no. 12, pp. 1305-1336, 1993.
[4] “MPI: A Message Passing Interface Standard,” Int'l J. Supercomputer Applications, vol. 8, nos. 3-4, pp. 165-414, 1994.
[5] M. Alt, H. Bischof, and S. Gorlatch, “Program Development for Computational Grids Using Skeletons and Performance Prediction,” Proc. Third Int'l Workshop Constructive Methods for Parallel Programming (CMPP '02), June 2002.
[6] P. Trinder, K. Hammond, H.-W. Loidl, and S. Peyton Jones, “Algorithm + Strategy = Parallelism,” J. Functional Programming, vol. 8, no. 1, pp. 23-60, Jan. 1998.
[7] H.-W. Loidl, F. Rubio Diez, N. Scaife, K. Hammond, U. Klusik, R. Loogen, G. Michaelson, S. Horiguchi, R. Peña Marí, S. Priebe, A. Rebon Portillo, and P. Trinder, “Comparing Parallel Functional Languages: Programming and Performance,” Higher-Order and Symbolic Computation, vol. 16, no. 3, pp. 203-251, 2003.
[8] A. Al Zain, P. Trinder, H.-W. Loidl, and G. Michaelson, “Managing Heterogeneity in a Grid Parallel Haskell,” J. Scalable Computing: Practice and Experience, vol. 6, no. 4, 2006.
[9] R. Loogen, “Programming Language Constructs,” Research Directions in Parallel Functional Programming, K. Hammond and G. Michaelson, eds., Springer-Verlag, pp. 63-91, 1999.
[10] A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek, and V. Sunderam, PVM: Parallel Virtual Machine. MIT Press, 1994.
[11] D.B. Loveman, “High Performance Fortran,” IEEE Parallel and Distributed Technology, vol. 1, no. 1, pp. 25-42, 1993.
[12] G. Michaelson, N. Scaife, P. Bristow, and P. King, “Nested Algorithmic Skeletons from Higher Order Functions,” Parallel Algorithms and Applications, vol. 16, pp. 181-206, 2001.
[13] P. Trinder, K. Hammond, J. Mattson Jr., A. Partridge, and S. Peyton Jones, “GUM: A Portable Parallel Implementation of Haskell,” Proc. ACM Conf. Programming Languages Design and Implementation (PLDI '96), pp. 79-88, May 1996.
[14] S. Breitinger, R. Loogen, Y. Ortega-Mallén, and R. Peña Marí, “Eden: The Paradise of Functional Concurrent Programming,” Proc. European Conf. Parallel Processing (EuroPar '96), pp. 710-713, 1996.
[15] The Grid: Blueprint for a New Computing Infrastructure, I. Foster and C. Kesselman, eds., Morgan Kaufmann, 1999.
[16] Globus, http://www.globus.org/toolkit/, 2005.
[17] A. Grimshaw and W. Wulf, “The Legion Vision of a World-Wide Virtual Computer,” Comm. ACM, vol. 40, no. 1, pp. 39-45, 1997.
[18] F. Berman, G. Fox, and T. Hey, “The Grid: Past, Present, Future,” Grid Computing: Making the Global Infrastructure a Reality, F. Berman, G. Fox, and A. Hey, eds., John Wiley & Sons, pp. 9-50, 2003.
[19] D. Jackson, “Advanced Scheduling of Linux Clusters Using Maui,” Proc. Usenix Ann. Technical Conf. (Usenix '99), 1999.
[20] E. Smirni and E. Rosti, “Modelling Speedup of SPMD Applications on the Intel Paragon: A Case Study,” Proc. Int'l Conf. and Exhibition High-Performance Computing and Networks, Languages and Computer Architecture (HPCN '95), 1995.
[21] L. Valiant, “A Bridging Model for Parallel Computation,” Comm. ACM, vol. 33, no. 8, p. 103, Aug. 1990.
[22] M. Beck, J. Dongarra, G. Fagg, A. Geist, P. Gray, J. Kohl, M. Migliardi, K. Moore, T. Moore, P. Papadopoulos, S. Scott, and V. Sunderam, “HARNESS: A Next Generation Distributed Virtual Machine,” Future Generation Computer Systems, special issue on metacomputing, vol. 15, nos. 5-6, pp. 571-582, Oct. 1999.
[23] B.-Y. Evan Chang, K. Crary, M. DeLap, R. Harper, J. Liszka, T. Murphy VII, and F. Pfenning, “Trustless Grid Computing in ConCert,” Proc. Third Int'l Workshop Grid Computing (GRID '02), 2002.
[24] C. Baker-Finch, D. King, J. Hall, and P. Trinder, “An Operational Semantics for Parallel Lazy Evaluation,” Proc. Fifth Int'l Conf. Functional Programming (ICFP '00), pp. 162-173, Sept. 2000.
[25] T. Murphy VII, K. Crary, and R. Harper, “Distributed Control Flow with Classical Modal Logic,” Proc. 19th Int'l Workshop Computer Science Logic (CSL '05), pp. 51-69, July 2005.
[26] R. Whaley, A. Petitet, and J. Dongarra, “Automated Empirical Optimisations of Software and the ATLAS Project,” Parallel Computing, vol. 27, pp. 3-35, 2001.
[27] Distributed Shared Memory Home Pages, http://www.ics.uci.edu/javiddsm.html/, 2006.
[28] C. Morin, P. Gallard, R. Lottiaux, and G. Valle, “Design and Implementations of NINF: Towards a Global Computing Infrastructure,” Future Generation Computer Systems, vol. 20, no. 2, 2004.
[29] Y. Hu, H. Lu, A. Cox, and W. Zwaenepoel, “OpenMP for Networks of SMPs,” J. Parallel and Distributed Computing, vol. 60, no. 12, pp. 1512-1530, 2000.
[30] T.-Y. Liang, C.-Y. Wu, J.-B. Chang, and C.-K. Shieh, “Teamster-G: A Grid-Enabled Software DSM System,” Proc. Fifth IEEE Symp. Cluster Computing and the Grid (CCGrid '05), pp. 905-912, 2005.
[31] M. Aldinucci, M. Coppola, M. Danelutto, M. Vanneschi, and C. Zoccolo, “ASSIST as a Research Framework for High-Performance Grid Programming Environments,” Grid Computing: Software Environments and Tools, J.C. Cunha and O.F. Rana, eds., Springer, Jan. 2006.
[32] F. Berman, A. Chien, J. Cooper, K. Dongarra, I. Foster, D. Gannon, L. Johnsson, K. Kennedy, C. Kesselman, J. Mellor-Crummey, D. Reed, and L.W.R. Torczon, “The GrADS Project: Software Support for High-Level Grid Application Development,” Int'l J. High-Performance Computing Applications, vol. 15, no. 4, pp. 327-344, 2001.
[33] M. Aldinucci, M. Danelutto, and J. Dünnweber, “Optimization Techniques for Implementing Parallel Skeletons in Grid Environments,” Proc. Fourth Int'l Workshop Constructive Methods for Parallel Programming (CMPP '04), July 2004.
[34] M. Aldinucci and M. Danelutto, “Advanced Skeleton Programming Systems,” Parallel Computing, 2006.
[35] M. Cole, “Bringing Skeletons Out of the Closet: A Pragmatic Manifesto for Skeletal Parallel Programming,” Parallel Computing, vol. 30, no. 3, pp. 389-406, 2004.
[36] R.V. van Nieuwpoort, J. Maassen, G. Wrzesinska, R. Hofman, C. Jacobs, T. Kielmann, and H.E. Bal, “Ibis: A Flexible and Efficient Java Based Grid Programming Environment,” Concurrency and Computation: Practice and Experience, vol. 17, nos. 7-8, pp. 1079-1107, June 2005.
[37] J. Dünnweber, M. Alt, and S. Gorlatch, “APIs for Grid Programming Using Higher Order Components,” Proc. 12th Global Grid Forum (GGF '04), Sept. 2004.
[38] M. Alt and S. Gorlatch, “Adapting Java RMI for Grid Computing,” Future Generation Computer Systems, vol. 21, no. 5, pp. 699-707, 2005.
[39] H.-W. Loidl, P.W. Trinder, K. Hammond, S.B. Junaidu, R.G. Morgan, and S.L. Peyton Jones, “Engineering Parallel Symbolic Programs in GPH,” Concurrency: Practice and Experience, vol. 11, pp. 701-752, 1999.
[40] N. Karonis, B. Toonen, and I. Foster, “MPICH-G2: A Grid-Enabled Implementation of the Message Passing Interface,” J. Parallel Distributed Computing, vol. 63, no. 5, pp. 551-563, 2003.
[41] A. Al Zain, “Implementing High-Level Parallelism on Computational Grids,” PhD dissertation, School of Math. and Computer Sciences, Heriot-Watt Univ., Apr. 2006.
[42] G. Sipos and P. Kacsuk, Executing and Monitoring PVM Programs in Computational Grids with Jini, LNCS 2840, J. Dongarra, D. Laforenza, and S. Orlando, eds., Springer, pp. 570-576, http://springerlink.metapress.com/openurl.asp?genre=article&issn=0302-9743&volume=2840&spage=570, 2003.
[43] P. Trinder, R. Pointon, and H.-W. Loidl, “Towards Runtime System Level Fault Tolerance for a Distributed Functional Language,” Proc. Second Scottish Functional Programming Workshop (SFP '00), vol. 2, pp. 103-113, July 2000.

Index Terms:
Concurrent, distributed, and parallel languages, Grid Computing, Functional Languages
Abdallah D. Al Zain, Phil W. Trinder, Greg J. Michaelson, Hans-Wolfgang Loidl, "Evaluating a High-Level Parallel Language (GpH) for Computational GRIDs," IEEE Transactions on Parallel and Distributed Systems, vol. 19, no. 2, pp. 219-233, Feb. 2008, doi:10.1109/TPDS.2007.70728