Writing Programs that Run EveryWare on the Computational Grid
October 2001 (vol. 12 no. 10)
pp. 1066-1080

Abstract—The Computational Grid has been proposed as a platform for implementing high-performance applications on widely dispersed computational resources. The goal of a Computational Grid is to aggregate ensembles of shared, heterogeneous, and distributed resources (potentially controlled by separate organizations) to provide computational “power” to an application program. In this paper, we present a toolkit, called EveryWare, for the development of globally deployable Grid applications. EveryWare enables an application to draw computational power transparently from the Grid: it consists of a portable set of processes and libraries that can be incorporated into an application so that a wide variety of dynamically changing distributed infrastructures and resources can be used together to achieve supercomputer-like performance. We describe our experiences building the EveryWare toolkit prototype and explain its use in implementing a large-scale Grid application.
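The abstract describes applications that draw aggregate power from whatever resources happen to be available, with each resource contributing independently of its speed or platform. A minimal sketch of that pattern, assuming a pull-based work pool (the names `WorkPool`, `worker`, and `run` are illustrative and not EveryWare's actual API):

```python
# Hypothetical sketch of the pattern described above: resources of varying
# speed each pull independent work units from a shared pool, so aggregate
# throughput grows with whatever resources join, and the application is
# insulated from which resources those are.
import queue
import threading


class WorkPool:
    """Central pool of independent work units; results are aggregated."""

    def __init__(self, units):
        self.todo = queue.Queue()
        for u in units:
            self.todo.put(u)
        self.results = []
        self.lock = threading.Lock()

    def worker(self, compute):
        # Each resource loops: fetch a unit, compute, report the result.
        # A slow resource simply completes fewer units; no unit is lost.
        while True:
            try:
                u = self.todo.get_nowait()
            except queue.Empty:
                return
            r = compute(u)
            with self.lock:
                self.results.append(r)


def run(units, compute, n_resources=4):
    """Apply compute() to every unit using n_resources concurrent workers."""
    pool = WorkPool(units)
    threads = [threading.Thread(target=pool.worker, args=(compute,))
               for _ in range(n_resources)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return pool.results


if __name__ == "__main__":
    # Square 100 numbers using 4 simulated "resources".
    out = run(range(100), lambda x: x * x)
    print(sorted(out) == [x * x for x in range(100)])  # True
```

Here threads stand in for geographically dispersed machines; in a real Grid setting each worker would run on a separate host and report results over the network, and the pool would tolerate workers appearing and disappearing.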


Index Terms:
Computational Grid, EveryWare, Ramsey Number search, grid infrastructure, ubiquitous computing, distributed supercomputer.
Rich Wolski, John Brevik, Graziano Obertelli, Neil Spring, Alan Su, "Writing Programs that Run EveryWare on the Computational Grid," IEEE Transactions on Parallel and Distributed Systems, vol. 12, no. 10, pp. 1066-1080, Oct. 2001, doi:10.1109/71.963418