2008 IEEE Fourth International Conference on eScience
Indianapolis, IN
Dec. 7, 2008 to Dec. 12, 2008
ISBN: 978-1-4244-3380-3
pp: 285-292
ABSTRACT
Compute-intensive scientific applications are heavily reliant on the quantity of available computing resources. The Grid paradigm provides a large-scale computing environment for scientific users. However, conventional Grid job submission tools do not provide a high-level job scheduling environment for these users across multiple institutions. For extremely large numbers of jobs, a more scalable job scheduling framework that can leverage highly distributed clusters and supercomputers is required. In this paper, we propose a high-level job scheduling Web service framework, Swarm. Swarm is developed for scientific applications that must submit massive numbers of high-throughput jobs or workflows to highly distributed computing clusters. The Swarm service itself is designed to be extensible, lightweight, and easily installable on a desktop or small server. As a Web service, derivative services based on Swarm can be straightforwardly integrated with Web portals and science gateways. This paper provides the motivation for this research, the architecture of the Swarm framework, and a performance evaluation of the system prototype.
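The abstract describes Swarm as a Web service that accepts large batches of high-throughput jobs on behalf of portals and science gateways, but it does not specify the service interface. The sketch below is purely illustrative: the endpoint URL, payload fields, and ticket-based status polling are assumptions about how a gateway client might interact with a Swarm-like scheduling service over HTTP, not the interface defined in the paper.

    """Hypothetical client for a Swarm-like job scheduling Web service.
    The endpoint, payload schema, and 'ticket' field are assumptions for
    illustration; the paper's actual API is not given in this abstract."""
    import time
    import requests  # third-party HTTP client, assumed available

    SWARM_URL = "http://localhost:8080/swarm"  # hypothetical service endpoint

    def submit_batch(executable, argument_sets):
        """Submit one high-throughput batch: many jobs sharing an executable."""
        payload = {
            "executable": executable,
            "jobs": [{"arguments": args} for args in argument_sets],
        }
        resp = requests.post(f"{SWARM_URL}/batches", json=payload, timeout=30)
        resp.raise_for_status()
        return resp.json()["ticket"]  # opaque handle for later status queries

    def wait_for_batch(ticket, poll_seconds=60):
        """Poll the hypothetical batch-status resource until it terminates."""
        while True:
            status = requests.get(f"{SWARM_URL}/batches/{ticket}", timeout=30).json()
            if status["state"] in ("FINISHED", "FAILED"):
                return status
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        ticket = submit_batch("/usr/local/bin/simulate",
                              [["--seed", str(i)] for i in range(1000)])
        print(wait_for_batch(ticket)["state"])

A client of this shape would let a science gateway hand off thousands of parameter-sweep jobs in a single call and track them with one ticket, which matches the high-level, batch-oriented submission role the abstract attributes to Swarm.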
INDEX TERMS
high performance computing, Grid computing, Job scheduling
CITATION
Marlon Pierce, Sangmi Lee Pallickara, "SWARM: Scheduling Large-Scale Jobs over the Loosely-Coupled HPC Clusters", 2008 IEEE Fourth International Conference on eScience, pp. 285-292, 2008, doi:10.1109/eScience.2008.64