In 2014, virtualization surpassed 50 percent of all server workloads, according to Gartner, which expects the figure to reach 86 percent in 2016. That’s phenomenal growth for a relatively young technology. But given other shifts in the industry and the demands they produce, it’s a technology whose time has come.
The ubiquity of mobile devices and the rapid adoption of cloud-based services have led to astronomical quantities of data being produced. This in turn has led to the rise of the virtual machine (VM). In virtual environments, a software program reproduces the functions of physical hardware, which in turn creates new levels of hardware flexibility, utilization, and cost savings.
The growing popularity of virtualization enables organizations to run several applications simultaneously on the same hardware, driving demand for far greater storage capacity. This sudden spike in demand has warranted a novel approach to storage – specifically, a solution that offers effective management, efficiency, and interoperability.
Benefits of Virtualization
The benefits of server virtualization to enterprises are numerous — most notably, cost savings and flexibility. Virtualization allows organizations to efficiently utilize data center hardware. In a typical data center setup, physical servers sit idle for a significant percentage of the time. By running virtual servers on that hardware, the organization can optimize the use of its central processing units (CPUs), extracting far more work from the same physical machines.
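The consolidation math behind those savings can be illustrated with a small, hypothetical calculation. The utilization and headroom figures below are assumptions chosen for illustration, not figures from the article:

```python
import math

def hosts_needed(n_servers, avg_util, host_capacity=1.0, headroom=0.25):
    """Estimate how many physical hosts are needed after virtualization.

    n_servers:     number of existing physical servers, one workload each
    avg_util:      average CPU utilization of each workload (0..1)
    host_capacity: usable CPU per host, normalized so one old server = 1.0
    headroom:      fraction of each host reserved for spikes and failover
    """
    total_demand = n_servers * avg_util               # CPU actually consumed
    usable_per_host = host_capacity * (1 - headroom)  # CPU we allow per host
    return math.ceil(total_demand / usable_per_host)

# Forty servers idling at ~15% average CPU consolidate onto far fewer hosts.
print(hosts_needed(40, 0.15))  # 8 hosts instead of 40
```

Even with a conservative 25 percent headroom reserved on each host, the mostly idle CPU cycles of the original fleet collapse into a handful of well-utilized machines.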
Enterprises also gain flexibility with virtualization. Because VMs are decoupled from specific physical machines, organizations can shrink the physical footprint of their infrastructure and move workloads freely. For example, if an organization decides to change hardware, the data center administrator can simply migrate the virtual server to the newer, more advanced hardware, gaining improved performance at a reduced cost.
Before virtual servers, administrators had to install the new server and then reinstall and migrate all the data stored on the old server, a complex and time-consuming process. It is remarkably simpler to migrate a VM than a physical machine.
Demand for virtualization is spiking in data centers that host a large number of servers – somewhere in the range of 20-50 or above. By embracing virtualization, organizations can achieve significant levels of cost savings and flexibility.
Moreover, servers are far easier to manage once virtualized. The sheer challenge of physically managing a large number of servers can become arduous for data center staff. Virtualization empowers administrators to run the same number of servers on fewer physical machines, simplifying data center management.
Though virtualization has undeniable pluses, it is straining traditional data center infrastructure and storage devices. The original VM models used local storage found within the physical server, making it impossible for administrators to migrate a virtual machine from one physical server to an upgraded one with a more powerful CPU.
The introduction of shared storage—either network-attached storage (NAS) or a storage area network (SAN)—to the VM hosts solved this problem, introducing the ability to stack on several virtual machines. This eventually evolved to the current server virtualization scenario, where all physical servers and VMs are connected to a unified storage infrastructure.
However, this scenario creates its own problem: data congestion. Since all data moves through one access point, it gets bottlenecked during periods of excessive demand. Considering that the number of VMs and the volume of data are only expected to increase, it is obvious that storage architecture must be improved to keep pace with data growth.
The Case for Distribution
The early adopters of virtualized servers already experience the problems associated with single entry points and are working to mitigate their impact. Fortunately, there is hope for organizations looking to maximize the benefits of virtualization: they can prevent the data congestion found in traditional scale-out environments by eliminating the single point of entry.
Current NAS or SAN storage solutions unavoidably have a single access point that regulates the flow of data, leading to congestion during heightened demand. Instead, organizations should opt for a solution that has several entry points and distributes data uniformly across all servers. Even when many users access the system at once, it will retain optimal performance with reduced lag time.
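One common way multi-entry-point systems achieve that uniform spread is by hashing each object’s key to choose its home node, so any node can accept a request and route it without a central gateway. A minimal sketch follows; the node names and hashing scheme are illustrative assumptions, not the design of any particular product:

```python
import hashlib

# Hypothetical storage nodes, each of which can serve as an entry point.
NODES = ["node-a", "node-b", "node-c", "node-d"]

def node_for(key: str, nodes=NODES) -> str:
    """Deterministically map an object key to a node.

    Hashing spreads keys evenly across the cluster, so no single
    access point has to carry all of the traffic.
    """
    digest = hashlib.md5(key.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(nodes)
    return nodes[index]

# Different VM disk images land on different nodes rather than funneling
# through one controller.
placement = {k: node_for(k) for k in ("vm-001.vmdk", "vm-002.vmdk", "vm-003.vmdk")}
print(placement)
```

Production systems typically refine this idea with consistent hashing and replication so that adding or removing a node reshuffles only a fraction of the data, but the principle — no single choke point — is the same.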
This is currently the most straightforward method to address the problem. However, the next generation of storage infrastructure presents novel alternatives.
Rethinking Compute and Storage
There is a new storage architecture strategy designed to overcome the storage challenges that scale-out virtual environments encounter. This new approach involves running VMs within the storage nodes themselves (or running the storage inside the VM hosts) – consequently turning each node into a combined compute and storage node.
This in effect creates a “flat earth” storage environment. For example, when an organization uses shared storage in a SAN, the VM hosts normally draw from a separate storage layer below them. To solve the data bottleneck issues associated with that design, many organizations are moving away from the traditional two-layer architecture toward one in which both VMs and storage run on the same layer.
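The single-layer idea can be sketched in a few lines: each node hosts VMs and simultaneously contributes its local disks to a cluster-wide pool, so compute and storage scale together. The class and field names below are purely illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class HyperconvergedNode:
    """A node that is both a VM host and a storage node."""
    name: str
    vms: list = field(default_factory=list)  # VMs running locally on this node
    capacity_tb: float = 10.0                # local disk contributed to the pool

    def run_vm(self, vm_name: str) -> None:
        self.vms.append(vm_name)

def pool_capacity(nodes) -> float:
    """The shared pool is simply the sum of every node's local disks,
    so adding a compute node also adds storage capacity, and vice versa."""
    return sum(n.capacity_tb for n in nodes)

# A three-node cluster: no separate storage layer to bottleneck on.
cluster = [HyperconvergedNode(f"node-{i}") for i in range(3)]
cluster[0].run_vm("web-01")
print(pool_capacity(cluster))  # 30.0
```

The key contrast with the two-layer model is that there is no dedicated storage tier sitting behind a single controller; every node added brings both another entry point and more capacity.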
Leading the Way Forward
As with any new technology, there were initial snags as virtualization matured. However, its current widespread adoption demonstrates that those snags are a thing of the past, and that the technology is worth its salt.
Flexibility, efficiency, and cost savings are among the benefits enjoyed by enterprises that have virtualized their infrastructure. Resellers must be prepared with the information and solutions necessary to guide customers who want to create or scale a virtualized storage environment.