Issue No. 09, September 2004 (vol. 37)
Published by the IEEE Computer Society
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MC.2004.140
Companies Make USB Interconnects Wireless
A group of companies is developing a wireless version of the popular high-speed, host-to-device Universal Serial Bus (USB) interconnect standard. This would add wireless technology's mobility and convenience to an interconnect approach widely used with, for example, PC peripherals, handheld devices, and consumer electronics.
To advance the new approach, companies such as Agere Systems, Hewlett-Packard, Intel, Microsoft, NEC, Philips Semiconductors, and Samsung Electronics formed the Wireless USB Promoter Group.
The group has already begun defining a specification with a bandwidth of 480 Mbits per second, the same speed as wired USB 2.0 and much faster than USB 1's 12 Mbits per second. Wireless USB can transmit data over 3 meters at peak speeds and up to 10 meters at lower speeds. Wired USB's range is the length of a cable, typically up to 1.5 meters.
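To put the speed gap in perspective, the ideal transfer times work out as follows. This is a back-of-the-envelope sketch; real-world throughput always falls below the raw signaling rate, and the 5-megabyte file size is just an illustrative example:

```python
def transfer_seconds(size_bytes, rate_mbps):
    """Ideal transfer time (seconds) at a given signaling rate in Mbits/s."""
    bits = size_bytes * 8
    return bits / (rate_mbps * 1_000_000)

SONG = 5_000_000  # an illustrative 5-megabyte MP3 file

print(f"USB 1 (12 Mbps):    {transfer_seconds(SONG, 12):.2f} s")
print(f"USB 2.0 (480 Mbps): {transfer_seconds(SONG, 480):.2f} s")
```

At 480 Mbits per second, the same file moves in a fortieth of the time it takes at 12 Mbits per second, which is why the group targeted wired USB 2.0 parity rather than a slower rate.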
Wireless USB is also energy efficient—which would help conserve mobile devices' battery life.
The technology provides a cable-free environment for many portable devices, such as MP3 music players and video recorders, which now must connect via a cable to transfer data to another machine, said Dave Thompson, technical manager for technology, strategy, and standards at wireless vendor Agere and the company's representative to the Wireless USB Promoter Group.
Portable devices would no longer have to be within a cable's length of another machine to communicate with it, making connectivity considerably more flexible. This would permit, for example, elements of a home theater system to sit farther apart, making installation more convenient.
Apart from the wireless link itself, wireless USB uses basically the same protocol, architecture, device drivers, and driver infrastructure as its wired counterpart, thereby permitting a smooth migration path.
Wireless USB will be based on ultrawideband, a low-power, short-distance, wireless technology for transmitting large amounts of data over a wide spectrum of frequency bands. Wireless USB will operate via a UWB radio that uses orthogonal frequency-division multiplexing (OFDM) to achieve high bandwidth.
Potentially, wireless USB transmissions could be intercepted. However, the technology will include encryption, so intercepted transmissions could not be easily viewed.
According to Brad Hosler, Intel's wireless USB engineering manager, companies want to implement additional security but have not determined how to do so. One possible approach involves using a portable flash-memory card for storing passwords and other sensitive information. Removing the card would keep the sensitive information from intruders. Hosler said private- and public-key encryption schemes are too expensive for wireless USB implementations.
Hosler said the Wireless USB Promoter Group should finish the specification, which the USB Implementers Forum will eventually manage, by the end of this year; products using the technology should ship within another 12 months.
Industry observers expect the first wireless USB implementations to be stand-alone chips, chipsets, or perhaps PCI or other add-in cards. According to Thompson, they could also be dongles and adapter cards.
In the marketplace, said senior analyst John Jackson of the Yankee Group, a market research firm, wireless USB will compete with Bluetooth, a slower short-range connectivity technology.
Wireless USB's success will depend on vendor support, Jackson noted, and existing backing from big companies like Intel and Samsung should help.
Using a Pen to Draw Smaller Chip Features
Scientists have developed a new nanolithography technique that uses a pen-like device to deposit small amounts of material on silicon. Manufacturers eventually could use the dip-pen technique to draw tiny features on chips.
Smaller feature sizes would let manufacturers add more transistors and other circuitry to processors. This would enable chips that are more powerful than their predecessors without being bigger or that maintain the same power levels while getting smaller.
Reducing feature sizes is a key goal for chip makers, noted Carl D. Howe, cofounder of and analyst at Blackfriars Communications, a market-research and consulting firm.
Northwestern University's new lithography approach uses pens with sharp microscopic tips 20 nanometers in diameter. The tips, which can be coated with polymers or other materials, draw lines of molecules on a surface such as silicon or glass. The lines act like the resists currently used in chip fabrication, explained Northwestern Professor Chad A. Mirkin, director of the school's Institute for Nanotechnology. The system's sharp tip enables much smaller features than the light used in traditional photolithography, which is limited by wavelength sizes, Mirkin said.
With dip-pen nanolithography, feature sizes can be as small as 10 to 15 nanometers. Today's commercial photolithographic techniques, which use light passed through patterns on chrome masks placed over substrates, generally produce feature sizes only as small as 65 nanometers. In addition, Mirkin noted, his technique has significantly lower instrumentation costs than traditional chip lithography.
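A first-order estimate shows why those numbers matter, assuming transistor density scales with the inverse square of linear feature size (a simplification that ignores design-rule and interconnect overheads):

```python
def density_gain(old_nm, new_nm):
    """First-order gain in transistors per unit area when the linear
    feature size shrinks from old_nm to new_nm nanometers."""
    return (old_nm / new_nm) ** 2

# Shrinking from 65-nm to 15-nm features:
print(f"{density_gain(65, 15):.1f}x")  # prints 18.8x
```

Under this rough model, moving from 65-nanometer to 15-nanometer features would pack almost 19 times as many transistors into the same area.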
And the approach is precise because it places the pen's tip within a few tenths of a nanometer of the surface on which it is writing. The system condenses a tiny water droplet from the surrounding air and uses it as an accurate transport channel for the deposited material, explained Mirkin, who founded NanoInk Inc. to commercialize the technology. Early applications, he said, will use the pen's tiny size to find small defects in surfaces. This will make the technology suitable for creating and repairing today's lithographic masks and for fixing flat-panel displays.
Dip-pen lithography presently works with only a relatively small number of pens at one time, which makes it slow for making chips with millions of transistors and circuits. The company is thus raising money to scale up its technology and has already developed prototypes that can write with a million tips at one time.
New Technology Promises to Reduce Internet Congestion
The Internet Engineering Task Force is working on a technology designed to ease the network congestion that data-intensive, rich-media applications such as Internet telephony and videoconferencing could cause if widely adopted.
The IETF is developing the Datagram Congestion Control Protocol, which provides TCP-like congestion control for applications based on the User Datagram Protocol, such as streaming video and voice.
UDP, which can be used instead of TCP in IP networks, doesn't provide congestion control or many error-recovery mechanisms. For example, when a network is congested, a UDP application will continue to send high-rate data, which can force other applications off the network.
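A minimal sketch illustrates the problem: a UDP sender transmits at whatever rate the application chooses, with no feedback loop to slow it down. The destination, payload, and rate below are purely illustrative:

```python
import socket
import time

def blast_udp(dest, payload, rate_pps, seconds):
    """Illustrative only: a UDP sender keeps transmitting at a fixed rate
    regardless of network conditions -- there are no ACKs, no back-off,
    and no retransmission. Returns the number of datagrams sent."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        sock.sendto(payload, dest)  # fire and forget
        sent += 1
        time.sleep(1 / rate_pps)
    sock.close()
    return sent
```

Nothing in this loop ever learns that the network is congested; a TCP sender in the same situation would detect losses and throttle itself.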
UDP is preferred for rich-media applications because TCP's error-correction techniques involve retransmitting lost data. Retransmission can cause long delays, and for media streams that must play smoothly, delay is less acceptable than occasional data loss.
"DCCP provides a framework for various forms of congestion control for unreliable datagram applications," explained UCLA Assistant Professor Eddie Kohler, who is also chief scientist at Mazu Networks, a network-security vendor.
DCCP has two congestion-control approaches: quick-reacting TCP-like congestion control and slower-acting TCP-friendly rate control (TFRC). Both respond to congestion by reducing the amount of data transmitted. They then increase the data rate once the congestion ends.
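The two behaviors can be sketched roughly as follows. The window rule is the classic additive-increase, multiplicative-decrease response that TCP uses; the rate function is the simplified TCP throughput equation that TFRC-style rate control builds on (the full TFRC equation adds a retransmission-timeout term, omitted here for brevity):

```python
import math

def aimd_next_window(cwnd, loss):
    """TCP-like response: halve the congestion window on a loss event,
    otherwise grow it additively by one segment per round trip."""
    return max(1, cwnd // 2) if loss else cwnd + 1

def tfrc_rate(seg_bytes, rtt, loss_rate):
    """Simplified TCP throughput equation, X ~ s / (R * sqrt(2p/3)),
    giving a smooth send rate in bytes/second for a segment size s,
    round-trip time R, and loss event rate p."""
    return seg_bytes / (rtt * math.sqrt(2 * loss_rate / 3))
```

The window rule reacts sharply (halving the rate on a single loss), while the equation-based rate changes gradually as the measured loss rate changes, which is why TFRC suits media streams that tolerate loss better than abrupt rate swings.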
DCCP could be implemented in specific applications or, more generally, in operating systems. An OS could use DCCP as the transport protocol to increase or decrease an application's data-transmission rate in response to changing network conditions such as available bandwidth levels.
This would ensure that UDP applications don't use excessive amounts of bandwidth and cause network congestion. DCCP would also protect against denial-of-service and some other types of attacks via mechanisms that offload work from the server to the client and then deny hackers the server's resources.
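One common way to offload work in this fashion is a stateless cookie challenge; the sketch below uses a hypothetical HMAC-based cookie rather than DCCP's actual wire format. The server derives the challenge from the client's address and commits no per-connection resources until the client echoes it back, so an attacker cannot exhaust server state with spoofed connection requests:

```python
import hmac
import hashlib
import os

SERVER_SECRET = os.urandom(16)  # illustrative per-server secret

def make_cookie(client_addr):
    """Derive a stateless challenge from the client's address; the server
    stores nothing and can recompute the cookie when the client replies."""
    return hmac.new(SERVER_SECRET, client_addr.encode(), hashlib.sha256).digest()

def verify_cookie(client_addr, cookie):
    """Accept the connection only if the echoed cookie matches; a spoofed
    source address cannot produce a valid echo."""
    return hmac.compare_digest(make_cookie(client_addr), cookie)
```

The server only does cheap hashing per request; the expensive state allocation happens after `verify_cookie` succeeds, which is the resource-denial property the article describes.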
A problem with DCCP is that developers would have to modify applications to work with the technology, Kohler explained. It would take a lot of effort for developers to include DCCP as the transport layer in software, said Tim Dwight, chief technical officer for telecommunications-equipment-vendor Marconi Communications' Broadband Routing and Switching Unit.
Also, Dwight said, DCCP would sometimes slow data streams for users of rich-media applications who currently don't experience such slowdowns. However, he added, DCCP adoption would benefit the Internet as a whole.
The IETF plans to have a preliminary DCCP specification ready by the end of this year.