NEWS


Computing Now Exclusive Content — October 2009

News Archive

July 2012

Gig.U Project Aims for an Ultrafast US Internet

June 2012

Bringing Location and Navigation Technology Indoors

May 2012

Plans Under Way for Roaming between Cellular and Wi-Fi Networks

Encryption System Flaw Threatens Internet Security

April 2012

For Business Intelligence, the Trend Is Location, Location, Location

Corpus Linguistics Keep Up-to-Date with Language

March 2012

Are Tomorrow's Firewalls Finally Here Today?

February 2012

Spatial Humanities Brings History to Life

December 2011

Could Hackers Take Your Car for a Ride?

November 2011

What to Do about Supercookies?

October 2011

Lights, Camera, Virtual Moviemaking

September 2011

Revolutionizing Wall Street with News Analytics

August 2011

Growing Network-Encryption Use Puts Systems at Risk

New Project Could Promote Semantic Web

July 2011

FBI Employs New Botnet Eradication Tactics

Google and Twitter "Like" Social Indexing

June 2011

Computing Commodities Market in the Cloud

May 2011

Intel Chips Step up to 3D

Apple Programming Error Raises Privacy Concerns

Thunderbolt Promises Lightning Speed

April 2011

Industrial Control Systems Face More Security Challenges

Microsoft Effort Takes Down Massive Botnet

March 2011

IP Addresses Getting Security Upgrade

February 2011

Studios Agree on DRM Infrastructure

January 2011

New Web Protocol Promises to Reduce Browser Latency

To Be or NAT to Be?

December 2010

Intel Gets inside the Helmet

Tuning Body-to-Body Networks with RF Modeling

November 2010

New Wi-Fi Spec Simplifies Connectivity

Expanded Top-Level Domains Could Spur Internet Real Estate Boom

October 2010

New Weapon in War on Botnets

September 2010

Content-Centered Internet Architecture Gets a Boost

Gesturing Going Mainstream

August 2010

Is Context-Aware Computing Ready for the Limelight?

Flexible Routing in the Cloud

Signal Congestion Rejuvenates Interest in Cell Paging-Channel Protocol

July 2010

New Protocol Improves Interaction among Networked Devices and Applications

Security for Domain Name System Takes a Big Step Forward

The ROADM to Smarter Optical Networking

Distributed Cache Goes Mainstream

June 2010

New Application Protects Mobile-Phone Passwords

WiGig Alliance Reveals Ultrafast Wireless Specification

Cognitive Radio Adds Intelligence to Wireless Technology

May 2010

New Product Uses Light Connections in Blade Server

April 2010

Browser Fingerprints Threaten Privacy

New Animation Technique Uses Motion Frequencies to Shake Trees

March 2010

Researchers Take Promising Approach to Chemical Computing

Screen-Capture Programming: What You See is What You Script

Research Project Sends Data Wirelessly at High Speeds via Light

February 2010

Faster Testing for Complex Software Systems

IEEE 802.1Qbg/h to Simplify Data Center Virtual LAN Management

Distributed Data-Analysis Approach Gains Popularity

Twitter Tweak Helps Haiti Relief Effort

January 2010

2010 Rings in Some Y2K-like Problems

Infrastructure Sensors Improve Home Monitoring

Internet Search Takes a Semantic Turn

December 2009

Phase-Change Memory Technology Moves toward Mass Production

IBM Crowdsources Translation Software

Digital Ants Promise New Security Paradigm

November 2009

Program Uses Mobile Technology to Help with Crises

More Cores Keep Power Down

White-Space Networking Goes Live

Mobile Web 2.0 Experiences Growing Pains

October 2009

More Spectrum Sought for Body Sensor Networks

Optics for Universal I/O and Speed

High-Performance Computing Adds Virtualization to the Mix

ICANN Accountability Goes Multinational

RFID Tags Chat Their Way to Energy Efficiency

September 2009

Delay-Tolerant Networks in Your Pocket

Flash Cookies Stir Privacy Concerns

Addressing the Challenge of Cloud-Computing Interoperability

Ephemeralizing the Web

August 2009

Bluetooth Speeds Up

Grids Get Closer

DCN Gets Ready for Production

The Sims Meet Science

Sexy Space Threat Comes to Mobile Phones

July 2009

WiGig Alliance Makes Push for HD Specification

New Dilemmas, Same Principles:
Changing Landscape Requires IT Ethics to Go Mainstream

Synthetic DNS Stirs Controversy:
Why Breaking Is a Good Thing

New Approach Fights Microchip Piracy

Technique Makes Strong Encryption Easier to Use

New Adobe Flash Streams Internet Directly to TVs

June 2009

Aging Satellites Spark GPS Concerns

The Changing World of Outsourcing

North American CS Enrollment Rises for First Time in Seven Years

Materials Breakthrough Could Eliminate Bootups

April 2009

Trusted Computing Shapes Self-Encrypting Drives

March 2009

Google, Publishers to Try New Advertising Methods

Siftables Offer New Interaction Model for Serious Games

Hulu Boxed In by Media Conglomerates

February 2009

Chips on Verge of Reaching 32 nm Nodes

Hathaway to Lead Cybersecurity Review

A Match Made in Heaven: Gaming Enters the Cloud

January 2009

Government Support Could Spell Big Year for Open Source

25 Reasons For Better Programming

Web Guide Turns Playstation 3 Consoles into Supercomputing Cluster

Flagbearers for Technology: Contemporary Techniques Showcase US Artifacts and European Treasures

December 2008

.Tel TLD Debuts As New Way to Network

Science Exchange

November 2008

The Future Is Reconfigurable

High-Performance Computing Adds Virtualization to the Mix

by George Lawton

Experts consider cluster computing — linking groups of commodity, x86-based computers so that they can function like one high-performance machine — to be a way to democratize supercomputing.

The approach has made high-performance computing (HPC) more affordable and easier to implement for small and mid-sized companies than using traditional expensive, complex, single-machine supercomputers.

Virtualization has become a key new trend in cluster computing. 

In traditional server virtualization, a single machine runs multiple operating systems. Businesses could thus put many applications on servers, even if they require different OSs. This lets companies utilize their hardware more efficiently. 

Instead of letting single machines do the tasks of many, scale-up HPC virtualization — also called aggregation — efficiently binds many machines so that they can function as a virtual supercomputer.

"You can manage [work] with fewer human resources because the virtualization takes care of the lot of the things you used to worry about," said Mike Kahn, managing director of the Clipper Group consultancy.

Vendors are starting to sell HPC virtualization products, and companies are beginning to implement the approach. However, the technology is relatively new and still faces a number of challenges.

History

Control Data Corp., sparked by HPC pioneer Seymour Cray, introduced some of the first supercomputers in the mid-1960s.

Early machines were based on large scalar processors that shared a single memory pool. 

In the early 1990s vendors introduced new architectures based on massively parallel processing, which became the dominant HPC paradigm. 

However, supercomputers were expensive and required more IT expertise than most companies possessed. 

In 1994, NASA researchers clustered commodity components to create their Beowulf HPC machine. 

This turned out to be much less expensive than traditional big-iron machines, explained Bob Quinn, chief technical officer at virtualization vendor 3Leaf Systems.

In cluster computing, each participating computer runs its own operating system. A job-scheduling manager handles systemwide tasks.
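
As an illustration of that division of labor, the following Python sketch shows the kind of decision a job-scheduling manager makes: picking the least-loaded node for each submitted job. The node names and jobs are invented for the example; production clusters rely on full batch-scheduling systems rather than code like this.

    # Illustrative sketch of a cluster job-scheduling manager (nodes and jobs are invented).
    # Each node runs its own OS; the manager only decides where each job should execute.
    nodes = {"node01": 0, "node02": 0, "node03": 0}   # node name -> running job count
    jobs = ["render_frame_1", "render_frame_2", "gene_align_A", "log_scan_B"]

    def least_loaded(load_by_node):
        """Pick the node currently running the fewest jobs."""
        return min(load_by_node, key=load_by_node.get)

    for job in jobs:
        target = least_loaded(nodes)
        nodes[target] += 1                        # record the placement
        print(f"dispatch {job} -> {target}")      # a real manager would queue or launch it here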

Today, two-thirds of supercomputers are built via clusters, noted Steve Conway, HPC research vice president at IDC, a market research firm. 

Driving HPC Virtualization

HPC virtualization offers advantages other than saving money on supercomputing.

The scale-up approach offers better performance than traditional cluster computing, particularly for applications that require large shared memory. In cluster systems, each participating computer uses its own memory, so clusters aren't effective for applications that need shared memory.

Scale-up HPC virtualization also offers lower cost and less programming complexity than standard cluster systems.

Cluster systems have other problems that HPC virtualization addresses.

For example, the cluster infrastructure's installation, management, and I/O requirements are complex because the network manager must configure each machine to work with the entire cluster, said Shai Fultheim, CEO of HPC-virtualization vendor ScaleMP.

Cluster systems' parallel-computing programming model is also complex.

To maximize their effectiveness, cluster systems require load-balancing and distributed resource management, which must be provided manually by a programmer, rather than automatically by the OS.
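
The following Python fragment is a minimal sketch of what that hand-written bookkeeping looks like: the programmer statically carves a dataset into per-node slices by rank. The rank and node count are hard-coded here for illustration; in practice they would come from the cluster's parallel runtime.

    # Sketch of manual work partitioning in a cluster program (values hard-coded for illustration).
    # On a real cluster, rank and num_nodes would come from the parallel runtime, not constants.
    num_nodes = 4                      # machines participating in the job (assumed)
    rank = 2                           # this machine's index within the cluster (assumed)

    records = list(range(1000))        # the full dataset, known to every node

    # Each node computes the slice it owns; nothing rebalances the work automatically.
    chunk = len(records) // num_nodes
    start = rank * chunk
    end = len(records) if rank == num_nodes - 1 else start + chunk
    my_records = records[start:end]

    print(f"node {rank} processes records {start} through {end - 1}")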

Better support in commodity AMD and Intel x86 chips for virtualization has helped drive HPC virtualization.

For example, the new chips have dedicated hardware extensions that provide virtualization of critical subsystems such as the memory and I/O subsystem. This lets the processors connect memory and I/O resources more easily and with less performance overhead.
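
On a Linux host, the presence of these extensions shows up in the CPU flags the kernel reports: vmx for Intel VT-x and svm for AMD-V. A small check, assuming a Linux system with /proc/cpuinfo available (the flags indicate CPU-level support; the memory and I/O extensions are separate features):

    # Check /proc/cpuinfo for x86 hardware-virtualization flags (Linux only).
    # "vmx" indicates Intel VT-x; "svm" indicates AMD-V.
    def virtualization_flags(path="/proc/cpuinfo"):
        with open(path) as f:
            tokens = f.read().split()
        return {flag for flag in ("vmx", "svm") if flag in tokens}

    flags = virtualization_flags()
    print("hardware virtualization support:", flags or "not reported")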

HPC Virtualization

There are two types of HPC virtualization.

Scale-up Virtualization 

With scale-up HPC virtualization, a single hypervisor — the software that allocates a host machine's resources to each virtualized operating system or to each program running on a virtualized OS — runs across multiple computers. This aggregates multiple CPUs and memory systems and makes them appear as a single computer.
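
Conceptually, the aggregation step pools each node's processors and memory into one logical machine description that the single OS then sees. The Python sketch below is only a thought experiment with made-up blade specifications; actual products perform this aggregation inside the hypervisor, not in application code.

    # Thought-experiment sketch of scale-up aggregation (blade specifications are made up).
    # The hypervisor presents the pooled resources to one OS as a single machine.
    nodes = [
        {"name": "blade1", "cores": 16, "memory_gb": 64},
        {"name": "blade2", "cores": 16, "memory_gb": 64},
        {"name": "blade3", "cores": 16, "memory_gb": 64},
    ]

    single_system_image = {
        "cores": sum(n["cores"] for n in nodes),         # CPU count the guest OS would see
        "memory_gb": sum(n["memory_gb"] for n in nodes), # memory size the guest OS would see
    }

    print("aggregated virtual machine:", single_system_image)   # 48 cores, 192 Gbytes here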

The hypervisor more efficiently handles the work that the job scheduling manager performs in traditional cluster systems. 

Scale-up systems also dynamically provision workloads across the virtual supercomputer, which makes it easier to manage.

Moreover, the systems enable the OS to automatically handle the load-balancing and distributed resource management required to maximize cluster systems' effectiveness. Programmers must manually write code to perform this work in traditional cluster systems.

Networking improvements have overcome the low bandwidth and high latency of Ethernet technologies used in early virtualization approaches, which made binding multiple machines into a virtual supercomputer difficult. 

Scale-up virtualization is best for applications that require a lot of memory.

Cray is combining its CX1 supercomputer hardware with ScaleMP's vSMP software, which would let a single OS run across up to 128 x86 cores as one scale-up virtual HPC system.

3Leaf Systems' Distributed Virtual Machine Monitor enables the Red Hat Linux OS to run across multiple x86 servers, which could combine to form a scale-up virtual HPC.

Scale-out Virtualization

In this approach, a VM and hypervisor run on every processor core in each machine in the virtual supercomputer.

A grid-management tool, rather than a single hypervisor as in scale-up HPC virtualization, manages the overall system. 

This enables the system to allocate resources with fine granularity, said Gary Tyreman, senior vice president for products and alliances at cloud-computing vendor Univa. Thus, the virtual supercomputer can more easily start and stop individual applications to let more important processes run when necessary.
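
A minimal sketch of that kind of granularity, assuming a hypothetical grid manager that pauses lower-priority application VMs to free cores for a more important incoming job; real tools such as UniCloud expose this through their own management interfaces.

    # Hypothetical grid-manager sketch: pause lower-priority application VMs so a
    # more important job can use their cores (applications and priorities are invented).
    running = [
        {"app": "report_batch", "priority": 1, "cores": 8},
        {"app": "web_index",    "priority": 3, "cores": 16},
    ]

    def make_room(running_vms, needed_cores, incoming_priority):
        """Pause the lowest-priority VMs until enough cores are free."""
        freed, paused = 0, []
        for vm in sorted(running_vms, key=lambda v: v["priority"]):
            if freed >= needed_cores:
                break
            if vm["priority"] < incoming_priority:
                paused.append(vm["app"])
                freed += vm["cores"]
        return freed, paused

    freed, paused = make_room(running, needed_cores=8, incoming_priority=5)
    print(f"paused {paused}, freeing {freed} cores for the incoming job")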

Scale-out virtualization is best for processor-intensive applications.

Dell, Oracle, and Univa operate a grid, using Univa’s UniCloud VM management software, that offers scale-out virtualization services. 

Advantages

With scale-up HPC virtualization, organizations manage a single logical system, rather than each machine in a complex cluster. There is no need to deal with cluster file systems, cluster-interconnect issues, application provisioning, or the installation and updating of multiple operating systems and applications. However, improved cluster-management tools are starting to address these issues without the need for virtualization.

HPC virtualization enables more application uptime and less power consumption by making it easier for systems to effectively manage resources and programs.

The approach also lets developers create a simulated large-cluster system for testing and demonstration purposes before allowing it to access a real, full-scale computing resource, said Northwestern University associate professor Peter Dinda. 

Virtual Challenges

HPC environments require ultrahigh-speed message passing, which many of today's virtualization approaches don't provide because they use lower-bandwidth, higher-latency Ethernet or InfiniBand connections between participating computers.

The need for virtualization systems to run a hypervisor can rob them of some performance, which is at a premium in HPC environments. This is an issue when processes running on multiple VMs must exchange information, said Tyreman.

Looking Ahead

HPC virtualization is best suited for applications that require a minimum amount of communication between nodes, because of lower bandwidth and higher latency within the virtual supercomputer, said Tyreman. 

For the same reason, he added, the approach is more suitable for CPU- and memory-intensive applications — such as the analysis of a gene sequence or massive server log — than for those that are I/O-intensive.
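
The log-analysis example fits because the work splits into independent chunks that exchange almost nothing but their final counts. In the rough Python sketch below, a process pool stands in for the nodes of a virtual supercomputer, and the log lines are generated rather than read from a real server log.

    # Rough sketch of an embarrassingly parallel, CPU-bound job (log lines are synthetic).
    # Workers return only small per-chunk counts, so communication stays minimal.
    from multiprocessing import Pool

    def count_errors(lines):
        """CPU-bound scan over one chunk of the log."""
        return sum(1 for line in lines if "ERROR" in line)

    if __name__ == "__main__":
        log = [("ERROR" if i % 7 == 0 else "INFO") + f" request {i}" for i in range(1_000_000)]
        chunks = [log[i::4] for i in range(4)]          # four independent slices of work
        with Pool(processes=4) as pool:
            totals = pool.map(count_errors, chunks)     # only the per-chunk totals come back
        print("total errors:", sum(totals))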

Researchers are discussing adding more instrumentation to HPC virtualization systems to provide a clearer view of where CPU overhead is occurring within the environment. 

According to ScaleMP's Fultheim, HPC virtualization will foster the development of an ecosystem for on-demand cloud-based supercomputing services. 

On the other hand, Gordon Haff, senior analyst with market-research firm Illuminata, said HPC virtualization faces an uncertain future. 

"The big question," he said, "is to what degree this approach simplifies [processes] while still yielding good performance. Whether the technology takes off depends on this."

George Lawton is a freelance technology writer based in Monte Rio, California. Contact him at glawton@glawton.com.