Issue No.06 - June (2013 vol.62)
Published by the IEEE Computer Society
D.R. Avresky , International Research Institute for Autonomic Network Computing (IRIANC), Boston, MA, USA
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TC.2013.95
Cloud computing is defined as a pool of virtualized computer resources. Based on this virtualization, the cloud computing paradigm allows workloads to be deployed and scaled out quickly through the rapid provisioning of virtual or physical machines. A cloud computing platform supports redundant, self-recovering, highly scalable programming models that allow workloads to recover from the many inevitable hardware/software failures, and it monitors resource use in real time in order to provision the physical and virtual servers on which applications run. A cloud computing platform is more than a collection of computer resources, because it provides a mechanism to manage those resources. In a cloud computing platform, software is migrating from the desktop into the "clouds" of the Internet, promising users anytime, anywhere access to their programs and data. The editor-in-chief provides an overview of the technical articles and features presented in this issue.
In the paper “On the Optimal Allocation of Virtual Resources in Cloud Computing Networks,” C. Papagianni, A. Leivadeas, S. Papavassiliou, V. Maglaris, C. Cervelló-Pastor, and Á. Monje formulate the Virtual Network Embedding (VNE) problem in networked cloud environments.
Following the cloud service paradigm, the paper aims to: 1) extend the pool of shared resources to a layer 2/3 network topology, including heterogeneous network infrastructure, possibly spanning multiple domains; 2) provide a generic formulation of the resource mapping problem that is capable of taking Quality of Service (QoS) requirements into consideration; 3) support QoS provisioning of cloud Infrastructure as a Service (IaaS); 4) design and implement an experimental simulation environment that allows a flexible and structured evaluation of the performance and efficiency of the proposed approach; and, finally, 5) provide a proof of concept of the operational efficiency of the proposed approach via a prototype implementation of the framework on a Future Internet (FI) experimentation platform, FEDERICA.
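The resource mapping problem at the heart of VNE can be sketched as follows: map each virtual node onto a substrate node with enough spare capacity, then route each virtual link over a substrate path with enough spare bandwidth. The greedy heuristic below is only a minimal illustration of this two-stage mapping, not the authors' formulation or algorithm; the `greedy_embed` function, its data layout, and all capacities are hypothetical.

```python
from collections import deque

def greedy_embed(substrate_nodes, substrate_links, vnodes, vlinks):
    """Greedily map each virtual node to the substrate node with the most
    spare CPU, then route each virtual link over a shortest substrate path
    (BFS hop count) with enough spare bandwidth.
    Returns (node_map, link_map), or None if the request cannot be embedded."""
    cpu = dict(substrate_nodes)            # substrate node -> spare CPU
    bw = dict(substrate_links)             # (u, v) -> spare bandwidth (undirected)
    adj = {}
    for (u, v) in bw:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)

    # Stage 1: node mapping, largest CPU demand first.
    node_map = {}
    for vn, demand in sorted(vnodes.items(), key=lambda kv: -kv[1]):
        cands = [n for n in cpu
                 if cpu[n] >= demand and n not in node_map.values()]
        if not cands:
            return None
        best = max(cands, key=lambda n: cpu[n])
        cpu[best] -= demand
        node_map[vn] = best

    def spare(u, v):
        return bw.get((u, v), bw.get((v, u), 0))

    def take(u, v, amt):
        key = (u, v) if (u, v) in bw else (v, u)
        bw[key] -= amt

    # Stage 2: link mapping over links that still have enough bandwidth.
    link_map = {}
    for (a, b), demand in vlinks.items():
        src, dst = node_map[a], node_map[b]
        parent, q = {src: None}, deque([src])
        while q:
            u = q.popleft()
            if u == dst:
                break
            for v in adj.get(u, []):
                if v not in parent and spare(u, v) >= demand:
                    parent[v] = u
                    q.append(v)
        if dst not in parent:
            return None
        path, u = [], dst
        while parent[u] is not None:
            path.append((parent[u], u))
            u = parent[u]
        for (u, v) in path:
            take(u, v, demand)
        link_map[(a, b)] = list(reversed(path))
    return node_map, link_map
```

For example, embedding a two-node virtual network `{"x": 4, "y": 3}` with one virtual link of bandwidth 2 onto three substrate nodes `{"A": 10, "B": 8, "C": 6}` linked by `{("A", "B"): 5, ("B", "C"): 5}` maps `x` to `A`, `y` to `B`, and routes the virtual link over the substrate link `(A, B)`. The paper's actual formulation solves this jointly as an optimization problem under QoS constraints, which a greedy pass like this cannot guarantee.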
In the paper “Workload-Based Software Rejuvenation in Cloud Systems,” by D. Bruneo, S. Distefano, F. Longo, A. Puliafito, and M. Scarpa, the main goal of time-based rejuvenation models is to find an optimal rejuvenation timer that minimizes some objective function. Usually, the timer is set at system start-up and does not change with the system dynamics (e.g., system workload variations). The authors refer to this kind of approach as a fixed timer policy. Another contribution of the present work is the specification of a time-based policy that adapts the rejuvenation timer to the Virtual Machine Monitor (VMM) conditions, taking its workload and aging into account (a variable timer policy). The effectiveness of the proposed modeling technique is demonstrated through a numerical example based on a case study taken from the literature. It shows how the proposed variable timer policy outperforms the fixed one in terms of improved system availability, also while varying the way failure rates are affected by the workload. Notably, the authors present an analytic technique that can represent generic failure and repair distributions, adequately modeling changes in the workload through a reliability-conservation principle.
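The contrast between the two policies can be made concrete with a small sketch: a fixed timer policy always returns the same interval, while a variable timer policy shortens the interval as the observed workload grows, since software aging accumulates faster under load. The `next_rejuvenation_timer` rule below is a hypothetical illustration of that idea, not the adaptation law defined in the paper.

```python
def next_rejuvenation_timer(base_timer, workload, sensitivity=0.5):
    """Hypothetical variable-timer rule: shorten the rejuvenation interval
    as the observed VMM workload grows.
    base_timer  -- the interval (e.g., hours) a fixed timer policy would use
    workload    -- current VMM load, normalized to [0, 1]
    sensitivity -- how strongly load shrinks the interval
    With workload == 0 the rule reduces to the fixed timer policy."""
    assert 0.0 <= workload <= 1.0
    return base_timer / (1.0 + sensitivity * workload)
```

For instance, with a base interval of 24 hours the rule leaves an idle VMM untouched for the full 24 hours but rejuvenates a fully loaded one after 16 hours; the interval decreases monotonically in the workload, mirroring the intuition that a heavily loaded VMM ages, and should therefore be refreshed, sooner.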
The paper “Integrated Approach to Data Center Power Management,” by L. Ganesh, H. Weatherspoon, T. Marian, and K. Birman, focuses on a key aspect of data center operational efficiency: energy management.
This paper takes an integrated approach to data center energy management that simultaneously addresses idle-resource energy consumption and support-infrastructure energy consumption. The authors argue for a power management approach that powers down racks, or even entire containerized data centers, when idle, thus powering down not only servers but also their associated power distribution, backup, networking, and cooling equipment. The evaluation shows that shifting to this model combines the energy savings of the power-proportional and green data center approaches without impacting performance. The authors also show that this shift is practical today at very low deployment cost, and that current data center trends strongly enable it.
The authors believe that an increasingly likely vision of the future of online services is one where a few infrastructure providers compete to host the world's services and data. They show that, for a SaaS provider, existing data replication and placement policies fit the proposed large Power Cycle Unit (PCU) model. Further, the authors show that a SaaS provider could offer storage options up to 16.5 percent cheaper by adopting rack-based power management and tuning the number of replicas kept live. Finally, they examine another point in the design space: container farms. The authors show that, in this scenario, using entire containers as the PCU is practical and leads to no performance penalty over node-based power management.
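The rack-as-PCU argument can be illustrated with a toy energy model: each powered rack carries a fixed support-infrastructure draw (power distribution, cooling, networking) on top of its servers, so node-based power management keeps paying that overhead for every rack, while rack-based management packs work onto fewer racks and switches idle racks off entirely. The `energy_draw` function and all power figures below are hypothetical illustrations, not numbers from the paper.

```python
def energy_draw(racks, servers_per_rack, active_servers,
                server_power=300.0, overhead_per_rack=1000.0, pcu="server"):
    """Toy model of integrated power management (illustrative numbers only).
    pcu="server": idle servers are powered down individually, but every
                  rack stays on, so all per-rack overhead is still paid.
    pcu="rack":   the workload is packed onto as few racks as possible and
                  whole idle racks (servers plus overhead) are powered off.
    Returns total draw in watts."""
    total_servers = racks * servers_per_rack
    assert 0 <= active_servers <= total_servers
    if pcu == "server":
        live_racks = racks                                   # all racks powered
    else:  # pcu == "rack"
        live_racks = -(-active_servers // servers_per_rack)  # ceil division
    return active_servers * server_power + live_racks * overhead_per_rack

# 10 racks of 40 servers at 25% load (100 active servers):
node_based = energy_draw(10, 40, 100, pcu="server")  # pays overhead for 10 racks
rack_based = energy_draw(10, 40, 100, pcu="rack")    # pays overhead for 3 racks
```

Under these illustrative numbers, the node-based PCU draws 40 kW while the rack-based PCU draws 33 kW at the same load; the two converge at full utilization, since then every rack must stay on in either scheme. The paper's evaluation additionally accounts for the replication and placement constraints that make such packing safe for a SaaS workload.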
D.R. Avresky is with the International Research Institute for Autonomic Network Computing (IRIANC), Boston, MA, USA. E-mail: firstname.lastname@example.org.
For information on obtaining reprints of this paper, please send e-mail to: email@example.com.
D.R. Avresky is an associate editor of IEEE Transactions on Computers for the areas of autonomic network computing, cloud computing, interconnection networks, multicore systems, and dependability of computing systems. In total, Dr. Avresky has published more than 126 papers, including IEEE Transactions journal papers, in the areas of networks (control, routing, fault tolerance, dynamic reconfiguration, adaptive routing, self-healing, performance analysis, virtual/overlay networks, middleware), software and protocol verification, parallel computers, functional programming, testing and diagnostics, and wireless sensor networks. Dr. Avresky has supervised 13 PhD students on the above-mentioned topics. He has been funded by the US National Science Foundation, Hewlett-Packard, Compaq, Tandem, NASA, Motorola Research Labs, Bell Labs, Akamai Technologies, and other institutions in the USA. Dr. Avresky is currently the President of the International Research Institute for Autonomic Network Computing (IRIANC), Boston, MA, USA/Munich, Germany. He has held academic positions at universities in the USA and Europe and has created research labs in the areas of network computing and fault-tolerant and high-performance parallel computers at these academic institutions. Dr. Avresky has been a guest editor/guest coeditor of six IEEE journal issues: IEEE Transactions on Computers, special section on optimizing the cloud, March 2013; IEEE Transactions on Computers, special section on autonomic network computing, November 2009; IEEE Transactions on Computers, special issue on embedded fault-tolerant systems, February 2002; IEEE Transactions on Parallel and Distributed Systems, special issue on dependable network computing, February 2001; IEEE Micro, special issue on embedded fault-tolerant systems, September/October 2001; and IEEE Micro, special issue on embedded fault-tolerant systems, September/October 1998.
In addition, he has published seven books and five book chapters: Cloud Computing, Springer Verlag, November 2010 (editors: D.A. Avresky, M. Diaz, B. Ciciani, A. Bode, and E. Dekel); Dependable Network Computing, Kluwer Academic Publishers, 2000; Fault-Tolerant Parallel and Distributed Systems, Kluwer Academic Publishers, 1998; Fault-Tolerant Parallel and Distributed Systems, IEEE Computer Society Press, 1995; Hardware and Software Fault-Tolerance in Parallel Computing Systems, Simon & Schuster International Group, 1992; Fault-Tolerant Microprocessor Systems, Jusautor, 1984; and Diagnostics and Reliability of Computers, Jusautor, 1979. Dr. Avresky is a founder and the Program/Steering Committee chair of the IEEE International Symposium on Network Computing and Applications (NCA*), Cambridge, MA, during 2002-2013; Steering Committee chair of the IEEE International Symposium on Network Cloud Computing & Applications (NCCA), 2011-2013, Toulouse, France, and Imperial College London, United Kingdom; general chair of the IEEE Cluster Symposium, September 2005, Burlington, MA, USA; and Steering Committee and general chair of the International Symposium on Cloud Computing, Munich, Germany, October 2009. He is the founder and Steering Committee chair of the annual IEEE International Workshop FTPDS (now DPDNS), held in conjunction with IEEE IPDPS during 1996-2013, and was program chair of the IEEE Workshop on Embedded Fault-Tolerant Systems (EFTS) (1996, Dallas; 1998, Boston; 2000, Washington, DC). Dr. Avresky has served as a reviewer for IEEE Transactions journals. He has been a member of the program committee and a reviewer for numerous IEEE conferences. He is a senior member of the IEEE Computer Society.