Cloud Computing

George K. Thiruvathukal, Loyola University Chicago
Manish Parashar, Rutgers University

Pages: pp. 8-9

Abstract—The guest editors discuss this special issue on cloud computing, exploring how cloud platforms and abstractions can be effectively used to support real-world science and engineering applications.

Keywords—cloud computing; grids; clusters; HPC; scientific computing

Cloud computing has emerged as a dominant paradigm, widely adopted by enterprises. Clouds provide on-demand access to computing utilities, an abstraction of unlimited computing resources, and support for on-demand scale-up, scale-down, and scale-out. Cloud services are also rapidly joining other infrastructures (for example, grids, clusters, and high-performance computing) as viable platforms for scientific exploration and discovery, as well as education. Thus, it's critical to understand application formulations and usage modes that are meaningful in such a hybrid infrastructure, along with fundamental conceptual and technological challenges and ways that applications can effectively utilize clouds.

This special issue explores how cloud platforms and abstractions, either by themselves or in combination with other platforms, can be effectively used to support real-world science and engineering applications. We specifically sought articles that addressed algorithmic and application formulations, programming models and systems, runtime systems and middleware, end-to-end application workflows, and experiences with real applications.

We received a total of 15 submissions for this special issue, of which we were able to accept only three. Prior to organizing the issue, Manish Parashar (one of the guest editors) had a co-authored article already under consideration, which underwent independent review. Under ordinary circumstances, it would have been published as a regular article. In this case, however, it provides a general overview of cloud computing principles, their connections to other cyberinfrastructure, and their role in supporting computational and data-enabled science and engineering. Having that overview at our disposal lets us keep this introduction succinct.

Each of the three accepted articles details novel aspects of cloud computing. For example, in “Comparing FutureGrid, Amazon EC2, and Open Science Grid for Scientific Workflows,” Gideon Juve and his colleagues explore the use of various cyberinfrastructure alternatives (computational grids and public or private clouds) to execute scientific workflows, an important class of scientific applications. Their article examines the benefits and drawbacks of cloud and grid systems using a case study from the astronomy/astrophysics domain, and compares the available infrastructures in terms of setup, usability, cost, resource availability, and performance.

Next, in “CloudCast: Cloud Computing for Short-Term Weather Forecasts,” Dilip Kumar Krishnappa and his colleagues discuss CloudCast, an application that uses cloud services to provide clients with personalized short-term weather forecasts based on their current location. This kind of forecasting is traditionally performed on supercomputers; CloudCast instead uses Amazon EC2 to generate accurate forecasts tens of minutes into the future for small areas. The authors examine the feasibility of commercial and research cloud services from a networking and computational perspective, and present exceptionally promising results.

Last, in “Cloud-Based Software Platform for Big Data Analytics in Smart Grids,” Yogesh Simmhan and his colleagues describe the use of cloud computing for smart grid applications, which incorporate pervasive sensors, actuators, and data networks into national power grids, focusing on a scalable software platform for the smart grid cyber-physical system built on cloud technologies. The article discusses Dynamic Demand Response (D²R), a challenge application that uses the University of Southern California campus's microgrid to perform intelligent demand-side management and relieve peak load. The authors describe the use of clouds to support on-demand provisioning, massive scaling, and manageability, with the ultimate goal of supporting the much larger-scale Los Angeles power grid.

As guest editors who both have experience with traditional high-performance computing domains, we're convinced that the confluence of traditional HPC (such as parallel supercomputers and grids) and commercial/self-hosted clouds can be a virtuous mix, and will continue to make inroads in scientific computing. The articles in this special issue help to make the case that many traditional HPC applications can truly make effective use of cloud services. At the same time, there are limitations and research challenges ahead.

With the advent of commercial cloud computing, we've come a long way since Sun Microsystems coined the slogan “the network is the computer” well over two decades ago. To borrow from the song made famous by Frank Sinatra and Count Basie, with cloud computing “the best is yet to come,” but “you ain't seen nothing yet.”