A US researcher has developed a robot that can adapt to changes to either its environment or its own structure, even if it is damaged.
University of Vermont assistant professor Josh Bongard said his Starfish robot doesn't regenerate a new limb if one is destroyed, like its biological counterpart, but rather adapts its behavior in response to damage and learns to walk in new ways.
"The approach is unique in that the robot adapts by creating a computer simulation of itself: how many legs it has, how those legs are arranged, whether the ground over which it's moving is flat or rocky," he explained. "By creating such a simulation, it can mentally rehearse ways of walking before executing them."
"This is particularly useful in dangerous situations," he said. "If the robot is perched on the edge of a cliff, it shouldn't simply try to walk forward and see what happens. It should build an understanding of its immediate environment first."
Figure: The Starfish robot uses sensors and simulation software to adapt to changes to either its environment or its own structure, such as the loss of a limb. Starfish rocks back and forth to enable sensors to determine the nature of the surrounding terrain or the robot's current physical structure. It then simulates possible movements to determine which will best fulfill its mission.
The technology could be used in disasters or dangerous situations.
This work is also important to the creation of self-configuring robots that could change locomotion, for example from crawling to walking, based on various conditions, said Bongard, whose work is funded largely by the US National Science Foundation.
Working with Cornell University's Computational Synthesis Laboratory (CCSL), researchers used 3D printing technology to fabricate the battery-powered robot's plastic body.
Starfish contains a small PC-104 computer with a 166-MHz Pentium processor, 64 Mbytes of RAM, and 512 Mbytes of CompactFlash memory.
It gets a sense of its environment and physical structure by rocking back and forth. This activates joint-angle sensors, which determine how the robot is tilting in response to the surrounding terrain or missing parts.
The robot collects data on its flash card and uploads the information to an external PC for processing by the simulation software. The external PC then sends the resulting information about actions and behaviors back to the robot.
Other hardware includes a Diamond DMM-32X-AT data-acquisition board, used for collecting joint-angle sensor data; and a Pontech SV203 servo control board, which drives Starfish's hinge motors.
The researchers program the robot with basic information about its design, such as the mass and shape of its parts. Starfish then builds a virtual model of itself, using Open Dynamics Engine software, which integrates the equations of motion for a connected set of physical objects, such as those that make up the robot.
Starfish can thus consider the physical repercussions of a given set of torques applied over time, such as whether a specific motor program—an application that determines how the robot will move—will cause it to go forward or simply shake in place.
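The phrase "integrates the equations of motion" can be made concrete with a minimal sketch. The single-joint model, the explicit Euler integrator, and all the numbers below are assumptions for illustration; the robot itself relies on the Open Dynamics Engine rather than anything hand-rolled like this:

```python
# Illustrative sketch of integrating the equations of motion for one
# joint: step angular velocity and angle forward under an applied torque.
# The real system uses the Open Dynamics Engine; this is a toy model.

def simulate(torque, inertia, steps, dt=0.01):
    """Integrate a single joint's motion under a constant torque."""
    omega = theta = 0.0
    for _ in range(steps):
        alpha = torque / inertia   # Newton's second law for rotation
        omega += alpha * dt        # acceleration -> angular velocity
        theta += omega * dt        # velocity -> joint angle
    return theta

# One second of simulated time under a modest constant torque:
print(round(simulate(torque=0.5, inertia=2.0, steps=100), 5))  # prints 0.12625
```

Running many such rollouts with different torque schedules is what lets the robot predict whether a motor program will move it forward or merely shake it in place.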
According to Bongard, his team layered an optimization technique on top of the simulator that lets the robot determine which of its large set of motor programs will best fulfill its mission.
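The structure of such a layered optimizer can be sketched as a simple search loop: propose a candidate motor program, score it in the simulator, and keep improvements. The hill-climbing strategy and the toy scoring function below are stand-ins, not Bongard's actual algorithm:

```python
# Stand-in for layering an optimizer over a simulator: score candidate
# motor programs and keep improvements. Hill climbing and the toy
# fitness function are illustrative only.
import random

def simulated_score(program):
    """Toy 'simulator': rewards motor-program parameters near an
    arbitrary optimum (0.3 in every dimension)."""
    return -sum((p - 0.3) ** 2 for p in program)

def optimize(n_params=4, iterations=200, seed=0):
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(n_params)]
    best_score = simulated_score(best)
    for _ in range(iterations):
        candidate = [p + rng.gauss(0, 0.1) for p in best]  # small mutation
        score = simulated_score(candidate)
        if score > best_score:        # keep only improvements
            best, best_score = candidate, score
    return best_score

print(f"best score: {optimize():.4f}")  # approaches 0.0 as search converges
```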
"[Other] lines of inquiry that I'm pursuing involve looking at ways to get teams of robots to collectively simulate their surroundings, and then share those simulations," he added. "In this way, robots could work together to solve tasks that are beyond any one of them."
Intel is leading an effort to create a new, much faster version of the Universal Serial Bus standard used for connecting computers to peripherals and other devices. Proponents say faster connections are necessary to transfer today's large amounts of data-intensive digital content between computers and devices in a reasonable amount of time.
To develop SuperSpeed USB technology, Intel has formed the USB 3.0 Promoters Group with members Hewlett-Packard, Microsoft, NEC, NXP Semiconductors, and Texas Instruments.
Once they design the standard, the USB Implementers Forum (www.usb.org) will handle tasks such as product-compliance testing and certification, as well as trademark protection and user education, said Jeff Ravencraft, Intel technology strategist and Promoters Group chair.
Proponents plan for USB 3.0 to provide 10 times the bandwidth of USB 2.0, which would raise speeds from 480 megabits per second to 4.8 Gbps.
The goal is to reduce the wait that consumers currently experience moving rich digital video, audio, and other content between their PCs and portable devices, explained Ravencraft.
USB 3.0 users could transfer a 27-gigabyte high-definition movie to a portable media player in 70 seconds, as opposed to the 14 minutes required with USB 2.0.
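A quick back-of-the-envelope check, using only the sizes and rates quoted above, shows these figures assume effective throughput below the raw signaling rate, as protocol overhead would predict:

```python
# Back-of-the-envelope check of the transfer times quoted above.
# Effective throughput sits below the signaling rate because of
# protocol overhead, which is why the quoted times exceed these ideals.

def transfer_seconds(size_gbytes, rate_gbps):
    """Seconds to move size_gbytes at an effective rate of rate_gbps."""
    return size_gbytes * 8 / rate_gbps

print(round(transfer_seconds(27, 4.8)))   # 45 s at the full 4.8 Gbps
print(round(transfer_seconds(27, 0.48)))  # 450 s (7.5 min) at the full 480 Mbps

# The quoted 70 s and 14 min therefore imply effective rates of roughly:
print(round(27 * 8 / 70, 1))         # ~3.1 Gbps for USB 3.0
print(round(27 * 8 / (14 * 60), 2))  # ~0.26 Gbps for USB 2.0
```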
The new technology might even allow a device to send high-definition signals to a TV, noted analyst Carl D. Howe, a principal with consultancy Blackfriars Communications.
USB 3.0 would also work with USB drives, camcorders, external hard drives, MP3 players, portable telephones, and other flash-capable portable devices, Ravencraft said.
It will be backward compatible with earlier USB versions, using the same connectors and programming models, and will support multiple communications technologies, he added.
USB 3.0 will improve performance in part by using two channels to separate data transmissions and acknowledgements, thereby letting devices send and receive simultaneously, rather than serially.
Unlike USB 2.0, the new technology won't continuously poll devices to find out if they have new data to send, which should reduce power consumption.
USB 3.0 will also add quality-of-service capabilities to guarantee specified levels of resources to high-priority traffic.
Ravencraft declined to provide more detail, explaining that "the specification is still being defined."
Intel expects the final standard to be finished by mid-2008 and to appear in peripherals and devices in 2009 or 2010.
USB's main competitors among external bus technologies are FireWire (IEEE 1394), which has a new standard that runs at 3.2 Gbps; and eSATA (external serial advanced technology attachment), which offers up to 3 Gbps now and a planned 6 Gbps by mid-2008.
Said Howe, "My guess is that video recorders and the like may go with FireWire, which can run … without imposing the big computational load that USB does. And there are other systems, like wireless USB, that promise similar speeds but without the wires."
Several companies are cooperating to develop virtualization standards. This would enable products from virtualization firms to work together, an important development for both vendors and users of the increasingly popular technology.
Virtualization software enables a single computer to run multiple operating systems simultaneously, via the use of virtual machines (VMs). The products let companies use a single server for multiple tasks that would normally have to run on multiple servers, each working with a different OS. This reduces the number of servers a company needs, thereby saving money and using resources efficiently.
Currently, different vendors' VMs use their own formats and won't necessarily run on other virtualization vendors' platforms, explained Simon Crosby, chief technology officer with virtualization vendor XenSource. Different vendors' products also use separate formats for storing VMs on a user's hard drive.
To overcome this, virtualization vendors Microsoft, VMware, and XenSource; server makers Dell, Hewlett-Packard, and IBM; and the Distributed Management Task Force (DMTF), an international industry organization that develops management standards, are working on the Open Virtual Machine Format (OVF).
OVF defines an XML wrapper that encapsulates multiple virtual machines and provides a common interface so that the VMs can run on any virtualization system that supports OVF. A vendor could deliver an OVF-formatted VM to customers who could then deploy it on the virtualization platform of their choice. For example, a VM packaged for a VMware system could run on a XenSource system.
OVF utilizes existing tools to combine multiple virtual machines with the XML wrapper, which gives the user's virtualization platform a package containing all required installation and configuration parameters for the VMs. This lets the platforms run any OVF-enabled VM.
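The packaging idea can be pictured with a purely illustrative descriptor. The element names below are invented for this sketch and are not taken from the actual OVF specification, which was still being drafted at the time:

```xml
<!-- Hypothetical sketch of an OVF-style wrapper: one envelope describing
     two virtual machines, their disks, and their configuration settings.
     Element names are illustrative, not the real OVF schema. -->
<VirtualAppliance name="web-stack">
  <Disk id="disk1" file="webserver.vmdk" capacity="8 GB"/>
  <Disk id="disk2" file="database.vmdk" capacity="20 GB"/>
  <VirtualMachine name="webserver" disk="disk1" memory="1024 MB" cpus="1"/>
  <VirtualMachine name="database" disk="disk2" memory="2048 MB" cpus="2"/>
</VirtualAppliance>
```

Any platform that understands the wrapper can read the installation and configuration parameters and instantiate each VM accordingly, regardless of which vendor packaged it.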
Vendors are cooperating in this effort because users demand interoperability among different products for running multiple VMs on one computer, noted Laura DiDio, vice president and research fellow for Yankee Group, a market research firm. Moreover, interoperability would increase the overall use of virtualization.
DiDio said Yankee Group research shows 96 percent of all organizations plan to adopt virtualization, and one-third of them will use multiple virtualization products to get the exact array of capabilities they need. Standards-based interoperability will thus be critical, she explained.
In addition to enabling interoperability, OVF provides security by attaching digital signatures to virtual machines. This verifies that the OVF file actually came from the indicated source and lets users' systems determine whether anyone has tampered with the VMs.
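The tamper-detection half of this can be illustrated with a much-simplified sketch: compute a digest of the VM package and compare it later. Real OVF signing uses public-key digital signatures, which also prove the package's origin; the bare hash below shows only the integrity-checking idea:

```python
# Simplified sketch of tamper detection: hash the VM package and check
# the digest before deployment. Real OVF signing uses public-key digital
# signatures, not a bare hash; all names here are illustrative.
import hashlib

def package_digest(data: bytes) -> str:
    """SHA-256 digest of a packaged virtual machine, as a hex string."""
    return hashlib.sha256(data).hexdigest()

vm_package = b"...virtual machine disk image and XML descriptor..."
published = package_digest(vm_package)   # shipped alongside the package

# Later, the user's platform recomputes the digest before deploying:
assert package_digest(vm_package) == published       # unmodified: passes

tampered = vm_package + b"malicious change"
assert package_digest(tampered) != published         # modification detected
```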
There is no firm timeline for adopting and implementing the OVF yet. "I'm optimistic that we'll see something in the first half of 2008," said Winston Bumpus, DMTF president and VMware's director of standards architecture.
The task force is also working on standards for other aspects of virtualization, added Bumpus.
Anand Agrawala has something for computer users who like their work in piles rather than files.
Agrawala, cofounder and CEO of BumpTop, has created a computer desktop environment that lets users place their work in files or piles, represented by graphics that make the environment look like a real desktop. They can also manipulate and move their material and convert it from piles to files and back again, just as they would with physical items.
This enables them to function more comfortably by letting them handle their computer documents in the same way they work with their physical documents, explained Agrawala, who created the new interface during his master's degree studies at the University of Toronto.
The interface also gives the desktop a third dimension, which provides more information density than conventional environments.
To add functionality to the standard desktop, Agrawala used gaming technology. For example, the system employs the Ageia PhysX physics engine to handle the desktop's physics in a stable manner.
BumpTop uses a gaming engine to run the complex mathematics that controls the movement of documents on the desktop and renders them properly. "Say you want to move an object by throwing it into the corner. The math determines how fast an object will slow down and the friction that will come into play," Agrawala explained.
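Agrawala's thrown-object example can be sketched as a toy friction model: the object moves at its current speed each frame, and friction scales that speed down until it stops. The constants below are invented for illustration, not BumpTop's actual parameters:

```python
# Toy model of the friction calculation described: an object "thrown"
# with some velocity decelerates each frame until it stops. Constants
# are illustrative, not BumpTop's actual physics parameters.

def slide_distance(v0, friction=0.9, dt=1/60, min_speed=0.01):
    """Distance an object slides before friction halts it (toy model)."""
    x, v = 0.0, v0
    while v > min_speed:
        x += v * dt        # move at the current speed for one frame
        v *= friction      # friction scales the speed down each frame
    return x

# A faster throw slides farther before friction stops it:
print(round(slide_distance(5.0), 3))
print(round(slide_distance(2.0), 3))
```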
He said the BumpTop software works in the background when other applications are open and does not unduly tax the host computer's CPU.
BumpTop hooks into the operating and file systems so that it accurately reflects when files are created, moved, or modified.
Although the system currently works only with Windows, Agrawala said he plans to make it support multiple platforms.
Agrawala said he is still working on a business model for BumpTop and is developing an alpha version of the system. He plans to release a finished product in the near future.

Figure: Anand Agrawala (right), cofounder and CEO of BumpTop, demonstrates his new computer desktop environment, which lets users place their work in files or piles, shown by graphics that make the interface look like a real desktop. Users can also manipulate and move their material, just as they would with physical items.