April 2011 (Vol. 44, No. 4) pp. 18-21
0018-9162/11/$31.00 © 2011 IEEE
Published by the IEEE Computer Society
Some Users Find the Speed of Light Too Slow for Their Networks
Even light-speed fiber-optic networks experience delays that can prove costly for time-sensitive transactions. To determine how to mitigate latency, MIT researchers investigated how network delays affect performance. Their findings revealed that using an intermediate node can help shave time off the communications between two nodes.
Financial firms in particular want to minimize latency, even if by tens of milliseconds. Because neither the speed of light nor the distance between two points changes, network administrators must determine how to deal with light-speed latency.
Researchers Alex Wissner-Gross, founder and chief scientist at Enernetics and a research affiliate at the MIT Media Lab, and Cameron Freer, a junior researcher in the Department of Mathematics at the University of Hawaii at Manoa, created a general formula organizations can use to reduce latency by deriving the best intermediate location between two different information sources.
Communication can never be fast enough for traders, says Steve Rubinow, chief information officer at NYSE Euronext. There's a perception that any slight increase in speed can result in a huge financial advantage, he adds, but there are only so many things that can be done to increase speeds.
Initially, the researchers focused on developing an approach to cope with latency. They looked at any communication requiring the coordination of two tasks in which the data disperses over time and is unpredictable. Such so-called noisy tasks are common throughout the financial industry, especially in trading, where stock and commodity prices continually fluctuate.
The human element is wholly removed from high-frequency trading, which consists of servers talking to servers during the trading day. The buying and selling of equities and derivatives, which are particularly latency-sensitive assets, take place in microseconds. However, Rubinow says customers want those transactions to happen even faster. Corvil, which provides latency-management systems for high-performance trading, announced in late February 2011 that it can now track network latency in nanoseconds (billionths of a second). Days later, Donal Byrne, the company's chief executive, stated that he expects trading to eventually reach picosecond speeds.
Where the spreads between prices of these assets used to be nickels and dimes, says Rubinow, they're now pennies or less. Traders make up the difference by making transactions in volume, and these smaller spreads provide more opportunities for arbitrage.
In high-frequency trading, light propagation delays are, in many cases, the largest limiting factor preventing traders from immediately exploiting arbitrage opportunities, Wissner-Gross says. The researchers factored all possible sources of latency into the formula, including equipment latencies.
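To get a feel for the scale of these delays, the minimum one-way propagation latency over fiber can be estimated from distance alone, since light in silica fiber travels at roughly two-thirds of its vacuum speed. The fiber speed and exchange distances in this sketch are rough approximations for illustration, not figures from the researchers' formula:

```python
# Illustrative only: minimum one-way light-propagation latency over
# optical fiber. The ~2/3 c figure and the distances are approximations.

C_VACUUM_KM_S = 299_792                   # speed of light in vacuum, km/s
C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3      # light in silica fiber: ~2/3 c

def one_way_delay_ms(distance_km: float) -> float:
    """Lower bound on one-way propagation delay over fiber, in ms."""
    return distance_km / C_FIBER_KM_S * 1000

# Approximate great-circle distances between exchange cities
print(f"New York-London : {one_way_delay_ms(5570):.1f} ms")
print(f"New York-Chicago: {one_way_delay_ms(1150):.1f} ms")
```

No routing, switching, or equipment latency is included; real paths are longer than the great circle, so actual delays only grow from here.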
In addition to identifying a geographically optimal point between two locations, the formula also factors in the speed at which price fluctuations return to normal, which is influenced by larger market volumes. For trades between two points, say New York and London, the communication would be weighted toward locating an intermediate node closer to New York because of the exchange's volumes.
Using this formula, the researchers triangulated the optimal locations for servers that might be used for high-frequency trading across 52 exchanges worldwide. Because they examined potential trading between pairs of exchanges, as Figure 1 shows, no one region was heavily favored over another. Trading between New York and London, for example, would benefit from a datacenter located in either Nova Scotia or Iceland, Wissner-Gross explains.
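The Nova Scotia and Iceland candidates are intuitive: the unweighted great-circle midpoint between New York and London falls in the North Atlantic between them. A minimal sketch, using approximate city coordinates and ignoring the volume and price-relaxation weighting the researchers' actual formula applies:

```python
# Sketch: unweighted great-circle midpoint between two exchanges.
# Coordinates are approximate; this is not the researchers' formula,
# which shifts the point toward the higher-volume market.
import math

def midpoint(lat1, lon1, lat2, lon2):
    """Great-circle midpoint of two (lat, lon) points, in degrees."""
    x = y = z = 0.0
    for lat, lon in [(lat1, lon1), (lat2, lon2)]:
        la, lo = math.radians(lat), math.radians(lon)
        x += math.cos(la) * math.cos(lo)   # sum unit vectors on the sphere
        y += math.cos(la) * math.sin(lo)
        z += math.sin(la)
    lat_m = math.atan2(z, math.hypot(x, y))
    lon_m = math.atan2(y, x)
    return math.degrees(lat_m), math.degrees(lon_m)

# New York (40.7 N, 74.0 W) and London (51.5 N, 0.1 W)
lat, lon = midpoint(40.7, -74.0, 51.5, -0.1)
print(f"Midpoint: {lat:.1f} N, {-lon:.1f} W")  # lands in the North Atlantic
```

The result sits in the open ocean between Nova Scotia and Iceland, which is why those landmasses emerge as practical datacenter sites.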
Freer adds that the problem is that money can be lost if traders don't see an opportunity as soon as it's presented, especially when information is quickly generated and useful only for an instant. In high-frequency trading, which is conducted wholly by software, trades execute at intervals of less than one second based on preprogrammed strategies.
The financial sector was selected as a case study for this research because there's a powerful financial motive for improvement, but Wissner-Gross says this concept also can be more broadly applied to helping make the Internet faster. Freer adds that firms can use this tool in the near term to give them a better look at their current network operations, especially given the cost associated with constructing a new datacenter. Examining these possible intermediate sites, he says, might result in the discovery of location as a new natural resource, especially for areas ideally suited for low-latency datacenter construction.
Firms with datacenters can use the formula to determine the securities or derivatives they're best positioned to trade based on their current locations as well as to make subtle changes that can improve what Wissner-Gross calls the correlations between networks.
Although NYSE Euronext hasn't spent much time examining latency issues, Rubinow believes traders are eager to solve them. One company recently completed construction of a low-latency line between the New York and Chicago exchanges that shaved three milliseconds off the route's typical latency. He says there are traders willing to pay for such a perceived advantage.
Wissner-Gross says the MIT researchers have been working with several different firms interested in using their formula, including some building a low-latency network infrastructure for the financial industry.
Michigan Researchers Advance Millimeter-Scale Computing
Newly announced technologies developed at the University of Michigan represent a significant push for millimeter-scale computing systems and ubiquitous computing.
Researchers devised an implantable glaucoma sensor as well as an integrated antenna and radio. The sensor is a complete system in a cubic millimeter package, while tiny computer systems can use the integrated antenna for communication. David Blaauw, a professor of electrical engineering and computer science who is working on the sensor electronics, says that each ongoing research project constitutes an important milestone for computing at this scale.
Millimeter-scale systems, used to enable ubiquitous or pervasive computing, aren't formally defined, but they should be complete computing systems in which all the components are low-power and fit on one chip, including the radio and power source. David Wentzloff, a professor at the university whose group is working on the integrated antenna and radio, says that beyond size, it's also important for such systems to perpetually operate, requiring them to harvest light to operate or recharge.
Blaauw says the intraocular pressure sensor is the first complete millimeter-scale computing system. This implantable eye pressure monitor is designed to track glaucoma by taking a patient's corneal pressure readings every 15 minutes. Irregular pressure indicates glaucoma, which damages the nerves at the back of the eye. The system consists of an ultra-low-power microprocessor, a pressure sensor, memory, a thin-film battery, a solar cell, and a wireless radio, all contained within a cubic-millimeter package. Also in the device is a timer that controls when the processor takes measurements; circuitry to manage device power, including converting the solar-gathered energy to a charge the system can use; and an analog-to-digital converter for the data.
The computer's memory retains data for a week. To retrieve stored information, the doctor or patient holds an external device near the eye that wakes the sensor and reads the data. Blaauw says the implant could eventually be designed to automatically communicate directly with a physician's office using wireless technology.
A third-generation Phoenix chip, a processor designed by Blaauw's group that features a unique power-gating architecture and an extreme sleep mode to achieve ultra-low-power consumption, is used in the system. Blaauw says only the memory and timer run in sleep mode. The system also shuts off the solar cell's charging capability to prevent battery drain.
The sensor's average power consumption is 5.3 nanowatts. Keeping the battery charged requires exposure to either 10 hours of indoor light or 1.5 hours of sunlight. Because the sensor is implanted in the eye, no energy is harvested when the patient is napping.
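These figures imply a simple energy budget. Assuming the stated light exposures must replace exactly one day's consumption (the article gives no charging-efficiency or battery-capacity details, so losses are ignored), the implied harvest rates work out as follows:

```python
# Back-of-the-envelope energy budget from the figures in the article.
# Assumes harvesting must replace exactly one day's consumption;
# charging efficiency and battery losses are not in the article.

AVG_POWER_W = 5.3e-9            # average consumption: 5.3 nW
SECONDS_PER_DAY = 24 * 3600

daily_energy_j = AVG_POWER_W * SECONDS_PER_DAY     # one day's energy use

# Harvest power needed to recoup that energy in the stated exposure times
indoor_w = daily_energy_j / (10 * 3600)     # 10 hours of indoor light
sunlight_w = daily_energy_j / (1.5 * 3600)  # 1.5 hours of sunlight

print(f"Daily energy : {daily_energy_j * 1e6:.0f} microjoules")
print(f"Indoor light : {indoor_w * 1e9:.1f} nW harvested")
print(f"Sunlight     : {sunlight_w * 1e9:.1f} nW harvested")
```

The sensor's whole day of operation costs well under a millijoule, which is why such brief light exposure suffices.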
Currently, the sensor uses an asymmetric radio coil on the chip that talks to a larger external device. To be fully contained, the system needs a symmetric radio to communicate with external devices or other sensors.
The newly published research by Wentzloff's group is a proof of concept of a radio with an integrated antenna that can be used in a sensor node. Blaauw says that future iterations of his group's work on glaucoma sensors will use that technology.
Wentzloff says that although other research demonstrates that antennas can be made using a CMOS process, this work proves that an entire radio, antenna and electronics included, can be made using CMOS technology. The system has room beneath the antenna for the electronics, which saves on-chip area, and the integration also lowers manufacturing costs. Radios typically need a crystal to generate the radio frequency, but crystals are large and power-hungry. The antenna, made in metal using CMOS technology, is instead self-tuning: added circuitry monitors the signal, and the radio uses a feedback loop to lock onto the antenna's specific frequency.
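One way to picture such a feedback loop is an oscillator whose tunable capacitance is adjusted until its frequency matches the antenna's resonance. This is only a conceptual sketch, not the Michigan design; the inductance, target frequency, and binary-search tuning strategy are all invented for illustration:

```python
# Conceptual sketch of a self-tuning loop (not the Michigan circuit):
# adjust a tunable capacitance until an LC oscillator's frequency
# matches the antenna's resonance. All component values are invented.
import math

L_H = 10e-9          # fixed tank inductance: 10 nH (assumed)
TARGET_HZ = 915e6    # antenna resonance to lock onto (assumed)

def osc_freq(c_farads):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1 / (2 * math.pi * math.sqrt(L_H * c_farads))

# Feedback loop: binary-search the capacitor setting until the
# oscillator frequency sits on the antenna's resonance.
lo, hi = 1e-13, 1e-10
for _ in range(60):
    c = (lo + hi) / 2
    if osc_freq(c) > TARGET_HZ:   # too fast -> need more capacitance
        lo = c
    else:
        hi = c

print(f"Tuned C = {c * 1e12:.2f} pF, f = {osc_freq(c) / 1e6:.1f} MHz")
```

The point of the sketch is the control structure, not the numbers: a monitor compares the oscillator against the antenna and nudges a tuning element until the two agree, removing the need for an external crystal reference.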
The communication distance of these radios is so short that they might not talk directly to a hub, but they can communicate with an adjoining sensor node that can ultimately relay information hop-by-hop to the network's edge or beyond. The radios could be used in networks for either relaying measurements or coordinating network responses based on data gathered, Wentzloff says. These radios might also communicate with a device with more resources such as a cell phone.
Both research groups are continuing to refine these technologies, which face various federal approvals along the path to commercialization. Blaauw says the glaucoma implant is about to enter animal testing. Other researchers are interested in using the sensor for additional types of medical applications, including monitoring intracranial pressure and changes in tumor size during chemotherapy, as well as microbial fuel cells. But first, they're working on unresolved issues including extending the implant's battery life to at least six weeks.
Once ubiquitous computing becomes mainstream, Blaauw says, the devices needed in the new market would be sufficiently numerous to fuel the semiconductor industry's growth, as each user would need tens to thousands of millimeter-scale systems.
News Briefs written by Linda Dailey Paulson, a freelance technology writer based in Portland, Oregon. Contact her at email@example.com.