Issue No. 11 - November 2009 (vol. 42)
pp. 50-60
Michael Wilde, University of Chicago and Argonne National Laboratory
Ian Foster, University of Chicago and Argonne National Laboratory
Kamil Iskra, University of Chicago and Argonne National Laboratory
Pete Beckman, University of Chicago and Argonne National Laboratory
Zhao Zhang, University of Chicago
Allan Espinosa, University of Chicago
Mihael Hategan, University of Chicago
Ben Clifford, University of Chicago
Ioan Raicu, Northwestern University
ABSTRACT
Scripting accelerates and simplifies the composition of existing codes to form more powerful applications. Parallel scripting extends this technique to allow for the rapid development of highly parallel applications that can run efficiently on platforms ranging from multicore workstations to petascale supercomputers.
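As a rough illustration only (not the authors' Swift notation described in the article), the core idea of parallel scripting can be sketched in ordinary Python: many independent invocations of an existing, unmodified executable are launched concurrently and their outputs collected. The program name "simulate" and the file layout below are hypothetical placeholders.

    # Sketch of the parallel-scripting idea: run many independent instances
    # of an existing program concurrently and gather their outputs.
    # "simulate" and the data/*.dat inputs are hypothetical placeholders.
    import subprocess
    from concurrent.futures import ProcessPoolExecutor
    from pathlib import Path

    def run_simulation(input_file: Path) -> Path:
        """Invoke an existing (unmodified) executable on one input file."""
        output_file = input_file.with_suffix(".out")
        subprocess.run(["simulate", str(input_file), str(output_file)], check=True)
        return output_file

    if __name__ == "__main__":
        inputs = sorted(Path("data").glob("*.dat"))
        # Each task is independent, so all of them can run concurrently:
        # on a multicore workstation here, or across many nodes with a
        # distributed executor in place of the local process pool.
        with ProcessPoolExecutor() as pool:
            outputs = list(pool.map(run_simulation, inputs))
        print(f"Produced {len(outputs)} result files")

Systems such as Swift apply the same composition pattern at much larger scale, adding data management and fault tolerance on top of it.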
INDEX TERMS
Parallel scripting, Scientific computing, Distributed computing, Extreme-scale computing
CITATION
Michael Wilde, Ian Foster, Kamil Iskra, Pete Beckman, Zhao Zhang, Allan Espinosa, Mihael Hategan, Ben Clifford, and Ioan Raicu, "Parallel Scripting for Applications at the Petascale and Beyond," Computer, vol. 42, no. 11, pp. 50-60, November 2009, doi:10.1109/MC.2009.365.
REFERENCES
1. J. Ousterhout, "Scripting: Higher-Level Programming for the 21st Century," Computer, Mar. 1998, pp. 23-30.
2. Y. Zhao et al., "Swift: Fast, Reliable, Loosely Coupled Parallel Computation," Proc. 2007 IEEE Congress on Services, IEEE Press, 2007, pp. 199-206.
3. Y. Zhao et al., "A Notation and System for Expressing and Executing Cleanly Typed Workflows on Messy Scientific Data," ACM SIGMOD Record, Sept. 2005, pp. 37-43.
4. G. Hocky et al., Toward Petascale ab initio Protein Folding through Parallel Scripting, tech. report ANL/MCS-P1645-0609, Argonne National Laboratory, 2009.
5. J. De Bartolo et al., "Mimicking the Folding Pathway to Improve Homology-Free Protein Structure Prediction," Proc. National Academy of Sciences, 10 Mar. 2009, pp. 3734-3739.
6. Z. Zhang et al., "Design and Evaluation of a Collective I/O Model for Loosely-Coupled Petascale Programming," Proc. 2008 IEEE Workshop Many-Task Computing on Grids and Supercomputers (MTAGS 08), IEEE Press, 2008, pp. 1-10.
7. I. Raicu et al., "Toward Loosely Coupled Programming on Petascale Systems," Proc. 2008 IEEE/ACM Conf. Supercomputing (SC 08), IEEE Press, 2008, article no. 22.
8. S. Kenny et al., "Parallel Workflows for Data-Driven Structural Equation Modeling in Functional Neuroimaging," Frontiers in Neuroinformatics, Nov. 2009.
9. A. Fedorov et al., Non-Rigid Registration for Image-Guided Neurosurgery on the TeraGrid: A Case Study, tech. report WM-CS-2009-05, Dept. of Computer Science, College of William and Mary, 2009.
10. J. Dean and S. Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters," Comm. ACM, Jan. 2008, pp. 107-113.
11. Y. Gu and R.L. Grossman, "Sector and Sphere: The Design and Implementation of a High Performance Data Cloud," Philosophical Trans. Royal Society A, 28 June 2009, pp. 2429-2445.
12. M. Isard et al., "Dryad: Distributed Data-Parallel Programs from Sequential Building Blocks," ACM SIGOPS Operating System Rev., June 2007, pp. 59-72.
13. D. Abramson et al., "Parameter Space Exploration Using Scientific Workflows," Computational Science—ICCS 2009, LNCS 5544, Springer, 2009, pp. 104-113.
14. P. Balaji et al., "MPI on a Million Processors," Proc. 2009 European PVM/MPI Users' Group Conf. (EuroPVM/MPI 09), CSC-IT Center for Science, 2009.
15. D. Thain and M. Livny, "Building Reliable Clients and Services," The Grid: Blueprint for a New Computing Infrastructure, I. Foster and C. Kesselman, eds., Morgan Kaufmann, 2005, pp. 285-318.
16. B. Chamberlain, D. Callahan, and H. Zima, "Parallel Programmability and the Chapel Language," Int'l J. High-Performance Computing Applications, Aug. 2007, pp. 291-312.