Issue No. 2, February 2007 (vol. 8), p. 3
Published by the IEEE Computer Society
ABSTRACT
Industry consensus holds that once deployment and management costs are factored in, the typical enterprise spends US$10,000 to $15,000 per x86 server per year on a nonvirtualized computing configuration. Yet, even though IBM pioneered the technology in its System 360 and 370 mainframes in the 1960s and 1970s, the economics of PC-based computing architectures since the 1980s put large-scale virtualization on the back burner. However, as x86-based data centers grow ever larger, corporate IT managers are beginning to turn their attention back to the old idea of virtualization. Essentially, virtualization uses a virtual machine monitor or host called a hypervisor to enable multiple operating system instances to run on a single physical server. The hypervisor can run directly on a given server's hardware platform, with the guest operating system running on a layer above the hypervisor. It can also run within an operating system, with the guest OS running on the third layer above the hardware.
Virtualization was one of the computing industry's megatrends in 2006, and there's no sign that the wave of enterprise adoption—and its concomitant opportunity for software developers—will slow any time soon.
"I don't see where any of us are going to avoid it," says Paul Sikora, vice president of IT transformation at the University of Pittsburgh Medical Center. "It's just a matter of when we're going to embrace it. I use the gas mileage analogy quite a bit. Those that don't take advantage of virtualization will be getting eight miles per gallon, while those of us who do will be getting 60 to 80. The economics are that compelling."
UPMC is undertaking a system-wide consolidation and virtualization of its IT architecture, and Sikora estimates that, at the very least, the organization has saved US$10 million in the last six months. The total savings could be as high as $18 million.
Sikora's satisfaction with the decision to aggressively virtualize UPMC's architecture is echoed by Simon Crosby, chief technology officer of XenSource, a company that develops and supports the open source Xen hypervisor (http://www.cl.cam.ac.uk/research/srg/netos/xen/downloads.html).
"Virtualization is giving Moore's Law back to the customer," Crosby says. "We've been stuck in this weird mode where they're only getting only getting 10 percent of the compute power they've been buying."
Industry consensus holds that once deployment and management costs are factored in, the typical enterprise spends $10,000 to $15,000 per x86 server per year on a nonvirtualized configuration. Yet, even though IBM pioneered the technology in its System 360 and 370 mainframes in the 1960s and 1970s, the economics of PC-based computing architectures since the 1980s put large-scale virtualization on the back burner. However, as x86-based data centers grew ever larger, Crosby says corporate IT managers began turning their attention back to the old idea of virtualization.
"People were waking up to the fact that the data center was full, that power requirements had gone through the roof, and that they could not afford to continue to grow and scale out infrastructure," Crosby says. As power requirements grow, so do cooling requirements—even more rapidly, he says.
"The challenge is, the data center infrastructure is out of date and that presents problems."
Is virtualization the solution?
Essentially, virtualization uses a virtual machine monitor or host called a hypervisor to enable multiple operating system instances to run on a single physical server. The hypervisor can run directly on a given server's hardware platform, with the guest operating system running on a layer above the hypervisor. It can also run within an operating system, with the guest OS running on the third layer above the hardware.
Several subclassifications exist within the virtualization realm, but paravirtualization is the predominant industry approach today. This approach makes the virtual server OS aware that it's being virtualized, which allows the hypervisor and the guest OS to collaborate on achieving the fastest, most robust performance possible.
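One concrete way to see this guest awareness at work is the x86 CPUID interface: hypervisors announce their presence to the guest through a reserved feature bit and a vendor-signature leaf. The following is a minimal sketch in plain C (assuming GCC or Clang on an x86 machine); the leaf and bit numbers are the standard x86 conventions, and the exact signature strings vary by hypervisor.

```c
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1: on bare metal, ECX bit 31 is always 0; hypervisors
       set it to announce their presence to the guest. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;
    if (!(ecx & (1u << 31))) {
        puts("No hypervisor bit set: likely running on bare metal.");
        return 0;
    }

    /* CPUID leaf 0x40000000: hypervisors return a 12-byte vendor
       signature in EBX:ECX:EDX (e.g., "XenVMMXenVMM" for Xen,
       "VMwareVMware" for VMware). */
    char sig[13] = {0};
    __cpuid(0x40000000, eax, ebx, ecx, edx);
    memcpy(sig, &ebx, 4);
    memcpy(sig + 4, &ecx, 4);
    memcpy(sig + 8, &edx, 4);
    printf("Hypervisor signature: %s\n", sig);
    return 0;
}
```

This same discovery mechanism is the starting point for the hypervisor-guest collaboration that paravirtualization depends on.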
"The fundamental driver is to use the horsepower of the machine more effectively, and in the mass market, virtualization is the only game in town," Crosby says.
The great opportunity for software developers within this "only game in town" might lie in the wide-open field of competing against the industry's dominant virtualization technology, EMC's VMware (http://www.vmware.com/overview/home.html). One analyst report in mid-2006 estimated VMware held 55 percent of the virtualization market, while Microsoft held 29 percent and various Xen implementations had just 1 percent each; another firm reported VMware with the same 55 percent but IBM in second place with 18 percent of the market. VMware has enjoyed 31 consecutive record quarters and grew total revenues 101 percent year over year to $232 million in the fourth quarter of 2006, according to EMC's quarterly earnings report issued 23 January.
Such explosive growth and a "virtual" monopoly are bound to attract competition, and XenSource's Crosby says the time has arrived.
"I'm not saying they have a tough road ahead of them, but the challenge is now on. Industry will not put up with a single-vendor solution anymore."
Bogomil Balkansky, director of product marketing for VMware, says the company's head start in basic virtualization technology plus a commanding lead in management tools gives it a significant advantage in capturing further market share. He also cites the company's Community Source program, which gives development partners access to the VMware source code, as a key element in attracting and retaining a strong VMware ecosystem.
However, the virtualization market is still nascent in many vertical industries, which means that leading-edge users are actually ahead of their vendors in designing virtualization architectures. UPMC, for instance, is working with some of the healthcare industry's leading vendors, which are themselves still new to virtualization. The VMware Community Source partners are all software vendors, not end users with development savvy. That hasn't affected UPMC's decision to stick with VMware, Sikora says, but if the Xen model is as successful as other leading open source efforts, it could mean both cost savings and more input from pioneering users.
"Right now, no, we haven't considered Xen," Sikora says. "We'll be looking at it. We grabbed on to two technologies we're aggressively implementing, and taking the savings from those. If we can come back around with a lower cost in the future with a different technology, we would do that."
Xen also gained strength from a deal signed between XenSource and Microsoft in July 2006. The companies agreed to develop interoperability between Xen-enabled systems and Windows Server "Longhorn" virtualization.
"We certainly see the three-player market of Microsoft, Xen, and VMWare," says Gartner Group vice president John Enck. "Xen for Linux or open source markets, Microsoft for Windows, and VMWare as Switzerland—able to do anything on any architecture. There's plenty of room for all three technologies. The question for Xen is how many companies can the hypervisor support? Can we afford to support a dozen variations?"
Basic state of the art
An interesting economic dynamic has begun revealing itself in recent months. As virtualization technologies became more popular, server sales lagged slightly. In late 2006, the Gartner Group consultancy reported that year-to-year quarterly global server sales growth had been cut by roughly a third, from 13.1 percent to 9 percent. Yet two of the industry's leading processor manufacturers, Intel and AMD, have been investing heavily in their processor virtualization-assistance technologies, called Intel VT (for Virtualization Technology) and AMD-V (for AMD Virtualization). By handling the hardest-to-virtualize parts of the x86 instruction set in the chip itself, these technologies offer the hypervisor community a way to streamline their architectures and come close to native performance.
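These hardware assists are themselves advertised through CPUID feature flags, so a hypervisor (or a curious administrator) can test for them in a few lines. The sketch below (again assuming GCC or Clang on x86) is a minimal check; note that it reports only what the processor supports, not whether firmware has the feature enabled, which requires a separate model-specific-register check.

```c
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Intel VT-x: CPUID leaf 1, ECX bit 5 (VMX). */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
        puts("Intel VT-x (VMX) supported by this CPU.");

    /* AMD-V: CPUID extended leaf 0x80000001, ECX bit 2 (SVM). */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
        puts("AMD-V (SVM) supported by this CPU.");

    return 0;
}
```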
Alex Vasilevsky, chief technology officer of Virtual Iron, which scrapped developing its own hypervisor to join the Xen effort, says the current generation of server hardware includes CPU virtualization, but memory and I/O virtualization still need to be done in software.
"Doing those things together, we are delivering very close to native performance," Vasilevsky says, "looking at 3 to 6 percent degradation from native. But we think we can do better as hardware gets more mature."
"We absolutely rely on what Intel and AMD do to deliver a lot of the new benefits," XenSource's Crosby says. "There is no way to do a good job in Windows virtualization without that. We require VT for Windows virtualization."
And, as virtualizing data centers becomes more imperative for corporate IT managers, Crosby says, replacing hard-to-virtualize legacy servers with the new optimized versions (the hardware manufacturers' implicit goal) will become an easy decision.
Distributed computing: Virtual nirvana
Virtualization pioneers are already thinking beyond server farms and enterprise desktops to deploying large-scale distributed virtual grids. The notion is still in its infancy because, as Virtual Iron's Vasilevsky says, "Right now, grid computing and virtualization are really like two opposite things. If you have an application that needs more than one computer, you run it on the grid. If you have applications that are barely utilizing the computer, then you run them in virtualization because the consolidation use case and scalability don't go hand in hand."
However, the two communities are beginning to collaborate. Grid researchers are already experimenting with virtualization deployments in their configurations, and virtualization in distributed computing was a workshop topic for the first time at November's SC06, the annual supercomputing conference sponsored by the IEEE and the Association for Computing Machinery. Kate Keahey, a scientist working on the Globus Toolkit at Argonne National Laboratory, was program chairman for the workshop and says interest among researchers is gaining momentum.
"Grid computing proved that using remote resources is a viable alternative to just simply buying more resources for the local machine room and thereby created consumer demand for remote cycles," Keahey says. "The question now became, how can a provider cater to users with sometimes dramatically different requirements? And that's the question that virtualization helps answer."
Keahey says the workshop featured participants from both academic departments and industry who provided examples of how virtualization had helped in their work. "In general, the whole thing had a feel of right off the drawing board and into everyday life," she says. "This is very exciting, and you don't see it often."
UPMC's Sikora says combining virtualization and grid principles will become de rigueur for large enterprise data networks. The organization's IT planners are already conceptualizing virtualization architecture across the enterprise, which is western Pennsylvania's largest employer with 43,000 workers, 19 hospitals, and 400 other clinical sites.
The first step, Sikora says, is to consolidate the enterprise's applications on a standard platform, which makes virtualization easier. In turn, he says, virtualization means "you have a smaller footprint in a more robust infrastructure you can use in your secondary sites to back them up. It's not much of a stretch to think about linking two or three or four of them together to form grids and back each other up, and that's what's going to happen in healthcare. Virtualization is one of the foundations."
Conclusion
The benefits of those architectures aren't far off. They've already arrived for UPMC as well as for its primary vendor in the new architecture, IBM, in a deal valued at $400 million. In addition, Amazon.com has already debuted its publicly accessible utility grid, Elastic Compute Cloud (http://aws.amazon.com/ec2), which includes Xen virtualization (http://wiki.xensource.com/xenwiki/ElasticHosting).
With virtualization a fundamental piece of such new architectures, it appears that successful virtualization developers are in line to reap a windfall.