
Issue No. 2, February 2008 (vol. 19)

pp: 219-233

ABSTRACT

Computational Grids potentially offer low-cost, readily available, and large-scale high-performance platforms. For the parallel execution of programs, however, computational Grids pose serious challenges: they are heterogeneous, and have hierarchical and often shared interconnects with high and variable latencies between clusters. This paper investigates whether a programming language with high-level parallel coordination and a Distributed Shared Memory (DSM) model can deliver good, and scalable, performance on a range of computational Grid configurations. The high-level language, Glasgow parallel Haskell (GpH), abstracts over the architectural complexities of the computational Grid, and we have developed GRID-GUM2, a sophisticated Grid-specific implementation of GpH, to produce the first high-level DSM parallel language implementation for computational Grids. We report a systematic performance evaluation of GRID-GUM2 on combinations of high-/low-latency and homogeneous/heterogeneous computational Grids. We measure the performance of a small set of kernel parallel programs representing a variety of application areas, two parallel paradigms, and ranges of communication degree and parallel irregularity. We investigate GRID-GUM2's performance scalability on medium-scale heterogeneous and high-latency computational Grids, and analyse the performance with respect to the program characteristics of
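The high-level coordination the abstract refers to is expressed in GpH through the `par` and `pseq` combinators (the basis of evaluation strategies [6]): the programmer only identifies subexpressions worth evaluating in parallel, and the runtime system decides placement. A minimal sketch, assuming GHC with the `parallel` package (`Control.Parallel`); the `pfib` example is illustrative and not taken from the paper:

```haskell
import Control.Parallel (par, pseq)

-- Naive Fibonacci with GpH-style coordination:
-- `x `par` e` sparks x for possible evaluation on another
-- processing element; `y `pseq` e` evaluates y in the current
-- thread before continuing with e.
pfib :: Int -> Int
pfib n
  | n < 2     = n
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = pfib (n - 1)
    y = pfib (n - 2)

main :: IO ()
main = print (pfib 20)  -- prints 6765
```

Note that the program text is architecture-neutral: it names no processors, clusters, or latencies. Under an implementation such as GRID-GUM2, the virtual shared heap makes sparked values like `x` available to whichever processing element picks up the spark, which is what lets the same source program run unchanged across heterogeneous and hierarchical Grid configurations.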

INDEX TERMS

Concurrent, distributed, and parallel languages, Grid Computing, Functional Languages

CITATION

Phil W. Trinder, Greg J. Michaelson, Abdallah D. Al Zain, "Evaluating a High-Level Parallel Language (GpH) for Computational GRIDs",

*IEEE Transactions on Parallel & Distributed Systems*, vol. 19, no. 2, pp. 219-233, February 2008, doi:10.1109/TPDS.2007.70728

REFERENCES

- [1] I. Foster and C. Kesselman, "Computational Grids," The Grid: Blueprint for a Future Computing Infrastructure, 1998.
- [2] J. Basney and M. Livny, "Deploying a High Throughput Computing Cluster," High Performance Cluster Computing, vol. 1, Prentice Hall, 1999.
- [3] S. Zhou, X. Zheng, J. Wang, and P. Delisle, "Utopia: A Load Sharing Facility for Large, Heterogeneous Distributed Computer Systems," Software—Practice and Experience, vol. 23, no. 12, pp. 1305-1336, 1993.
- [4] "MPI: A Message Passing Interface Standard," Int'l J. Supercomputer Applications, vol. 8, nos. 3-4, pp. 165-414, 1994.
- [5] M. Alt, H. Bischof, and S. Gorlatch, "Program Development for Computational Grids Using Skeletons and Performance Prediction," Proc. Third Int'l Workshop Constructive Methods for Parallel Programming (CMPP '02), June 2002.
- [6] P. Trinder, K. Hammond, H.-W. Loidl, and S. Peyton Jones, "Algorithm + Strategy = Parallelism," J. Functional Programming, vol. 8, no. 1, pp. 23-60, http://www.macs.hw.ac.uk/~dsg/gph/papers/psstrategies.ps.gz, Jan. 1998.
- [7] H.-W. Loidl, F. Rubio Diez, N. Scaife, K. Hammond, U. Klusik, R. Loogen, G. Michaelson, S. Horiguchi, R. Peña Marí, S. Priebe, A. Rebón Portillo, and P. Trinder, "Comparing Parallel Functional Languages: Programming and Performance," Higher-Order and Symbolic Computation, vol. 16, no. 3, pp. 203-251, 2003.
- [8] A. Al Zain, P. Trinder, H.-W. Loidl, and G. Michaelson, "Managing Heterogeneity in a Grid Parallel Haskell," J. Scalable Computing: Practice and Experience, vol. 6, no. 4, 2006.
- [9] R. Loogen, "Programming Language Constructs," Research Directions in Parallel Functional Programming, K. Hammond and G. Michaelson, eds., Springer-Verlag, pp. 63-91, 1999.
- [10] A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek, and V. Sunderam, PVM: Parallel Virtual Machine, MIT Press, 1994.
- [12] G. Michaelson, N. Scaife, P. Bristow, and P. King, "Nested Algorithmic Skeletons from Higher Order Functions," Parallel Algorithms and Applications, vol. 16, pp. 181-206, 2001.
- [13] P. Trinder, K. Hammond, J. Mattson Jr., A. Partridge, and S. Peyton Jones, "GUM: A Portable Parallel Implementation of Haskell," Proc. ACM Conf. Programming Language Design and Implementation (PLDI '96), pp. 79-88, http://www.macs.hw.ac.uk/~dsg/gph/papers/psgum.ps.gz, May 1996.
- [14] S. Breitinger, R. Loogen, Y. Ortega Mallén, and R. Peña Marí, "Eden: The Paradise of Functional Concurrent Programming," Proc. European Conf. Parallel Processing (EuroPar '96), pp. 710-713, 1996.
- [15] The Grid: Blueprint for a New Computing Infrastructure, I. Foster and C. Kesselman, eds., Morgan Kaufmann, 1999.
- [16] Globus Toolkit, http://www.globus.org/toolkit/, 2005.
- [18] F. Berman, G. Fox, and T. Hey, "The Grid: Past, Present, Future," Grid Computing: Making the Global Infrastructure a Reality, F. Berman, G. Fox, and A. Hey, eds., John Wiley & Sons, pp. 9-50, 2003.
- [19] D. Jackson, "Advanced Scheduling of Linux Clusters Using Maui," Proc. Usenix Ann. Technical Conf. (Usenix '99), 1999.
- [20] E. Smirni and E. Rosti, "Modelling Speedup of SPMD Applications on the Intel Paragon: A Case Study," Proc. Int'l Conf. and Exhibition High-Performance Computing and Networking (HPCN '95), 1995.
- [23] B.-Y. Evan Chang, K. Crary, M. DeLap, R. Harper, J. Liszka, T. Murphy VII, and F. Pfenning, "Trustless Grid Computing in ConCert," Proc. Third Int'l Workshop Grid Computing (GRID '02), 2002.
- [25] T. Murphy VII, K. Crary, and R. Harper, "Distributed Control Flow with Classical Modal Logic," Proc. 19th Int'l Workshop Computer Science Logic (CSL '05), pp. 51-69, July 2005.
- [27] Distributed Shared Memory Home Pages, http://www.ics.uci.edu/~javid/dsm.html, 2006.
- [28] C. Morin, P. Gallard, R. Lottiaux, and G. Vallée, "Design and Implementations of NINF: Towards a Global Computing Infrastructure," Future Generation Computer Systems, vol. 20, no. 2, 2004.
- [31] M. Aldinucci, M. Coppola, M. Danelutto, M. Vanneschi, and C. Zoccolo, "ASSIST as a Research Framework for High-Performance Grid Programming Environments," Grid Computing: Software Environments and Tools, J.C. Cunha and O.F. Rana, eds., Springer, Jan. 2006.
- [33] M. Aldinucci, M. Danelutto, and J. Dünnweber, "Optimization Techniques for Implementing Parallel Skeletons in Grid Environments," Proc. Fourth Int'l Workshop Constructive Methods for Parallel Programming (CMPP '04), July 2004.
- [34] M. Aldinucci and M. Danelutto, "Advanced Skeleton Programming Systems," Parallel Computing, http://www.di.unipi.it/~aldinuc/papers.html, 2006.
- [37] J. Dünnweber, M. Alt, and S. Gorlatch, "APIs for Grid Programming Using Higher Order Components," Proc. 12th Global Grid Forum (GGF '04), Sept. 2004.
- [41] A. Al Zain, "Implementing High-Level Parallelism on Computational Grids," PhD dissertation, School of Math. and Computer Sciences, Heriot-Watt Univ., Apr. 2006.
- [42] G. Sipos and P. Kacsuk, "Executing and Monitoring PVM Programs in Computational Grids with Jini," LNCS 2840, J. Dongarra, D. Laforenza, and S. Orlando, eds., Springer, pp. 570-576, 2003.
- [43] P. Trinder, R. Pointon, and H.-W. Loidl, "Towards Runtime System Level Fault Tolerance for a Distributed Functional Language," Proc. Second Scottish Functional Programming Workshop (SFP '00), vol. 2, pp. 103-113, July 2000.