Pages: pp. 18-21
Even light-speed fiber-optic networks experience delays that can prove costly for time-sensitive transactions. To determine how to mitigate latency, MIT researchers investigated how network delays affect performance. Their findings revealed that using an intermediate node can help shave time off the communications between two nodes.
Financial firms in particular want to minimize latency, even by tens of milliseconds. Because neither the speed of light nor the distance between two points can be changed, network administrators must find other ways to deal with light-speed latency.
Researchers Alex Wissner-Gross, founder and chief scientist at Enernetics and a research affiliate at the MIT Media Lab, and Cameron Freer, a junior researcher in the Department of Mathematics at the University of Hawaii at Manoa, created a general formula that organizations can use to reduce latency by deriving the optimal intermediate location between two information sources.
Communication can never be fast enough for traders, says Steve Rubinow, chief information officer at NYSE Euronext. There's a perception that any slight increase in speed can result in a huge financial advantage, he adds, but there are only so many things that can be done to increase speeds.
Initially, the researchers focused on developing an approach to cope with latency. They looked at any communication requiring the coordination of two tasks in which the data disperses over time and is unpredictable. Such so-called noisy tasks are common throughout the financial industry, but especially in trading, where stock and commodity prices continually fluctuate.
The human element is wholly removed from high-frequency trading, which consists of servers talking to servers during the trading day. The buying and selling of equities and derivatives, which are particularly latency-sensitive assets, take place in microseconds. However, Rubinow says customers want those transactions to happen even faster. Corvil, which provides latency management systems for high-performance trading, announced in late February 2011 that it can now track network latency in nanoseconds (billionths of a second). Days later, Donal Byrne, the company's chief executive, stated that he expects trading to eventually reach picosecond speeds.
Where the spreads between prices of these assets used to be nickels and dimes, says Rubinow, they're now pennies or less. Traders make up the difference by making transactions in volume, and these smaller spreads provide more opportunities for arbitrage.
In high-frequency trading, light propagation delays are, in many cases, the largest limiting factor preventing traders from immediately exploiting arbitrage opportunities, Wissner-Gross says. The researchers factored all possible sources of latency into the formula, including equipment latencies.
In addition to identifying a geographically optimal point between two locations, the formula also factors in the speed at which price fluctuations return to normal, which larger market volumes accelerate. For trades between two points, say New York and London, the formula would weight the intermediate node toward New York because of that exchange's volume.
Using this formula, the researchers calculated the optimal locations for servers that might be used for high-frequency trading across 52 exchanges worldwide. Because they examined potential trading between every pair of exchanges, as Figure 1 shows, no single region was heavily favored over another. Trading between New York and London, for example, would benefit from a datacenter located in either Nova Scotia or Iceland, Wissner-Gross explains.
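The article describes the formula only qualitatively, but the placement idea can be loosely illustrated: the node sits on the great circle between two exchanges, pulled toward whichever exchange's prices revert faster. The sketch below is an illustrative simplification, not the researchers' actual derivation; the coordinates and reversion rates are assumed example values.

```python
import math

def to_vec(lat, lon):
    """Convert latitude/longitude in degrees to a unit vector."""
    la, lo = math.radians(lat), math.radians(lon)
    return (math.cos(la) * math.cos(lo),
            math.cos(la) * math.sin(lo),
            math.sin(la))

def to_latlon(v):
    """Convert a unit vector back to latitude/longitude in degrees."""
    x, y, z = v
    return math.degrees(math.asin(z)), math.degrees(math.atan2(y, x))

def intermediate_node(p1, p2, revert1, revert2):
    """Place a node on the great circle between exchanges p1 and p2.

    revert1/revert2 are stand-ins for how quickly prices at each
    exchange revert to normal; the faster-reverting (higher-volume)
    exchange pulls the node toward itself.
    """
    t = revert2 / (revert1 + revert2)   # fraction of the way from p1 to p2
    a, b = to_vec(*p1), to_vec(*p2)
    dot = max(-1.0, min(1.0, sum(ai * bi for ai, bi in zip(a, b))))
    omega = math.acos(dot)              # angular separation
    s = math.sin(omega)
    # Spherical linear interpolation between the two unit vectors.
    v = tuple((math.sin((1 - t) * omega) * ai + math.sin(t * omega) * bi) / s
              for ai, bi in zip(a, b))
    return to_latlon(v)

# Illustrative only: New York reverting twice as fast as London,
# so the node lands one-third of the way along the arc from New York.
node = intermediate_node((40.7, -74.0), (51.5, -0.1), revert1=2.0, revert2=1.0)
```

With equal reversion rates the node falls at the geographic midpoint; skewing the rates toward New York moves it closer to North America, consistent with the Nova Scotia example above.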
Figure 1 Optimal locations for high-frequency trading nodes are shown as small dots. These best intermediate locations for all exchange pairs worldwide were calculated using a formula from MIT researchers. The large circles represent the 52 major securities exchanges. The researchers based the data on information reported by the World Federation of Exchanges in 2008. While some nodes' ideal locations are in dense fiber-optic network areas, many others are in the ocean or in sparsely connected regions.
Freer adds that the problem is that money can be lost if traders don't see an opportunity as soon as it's presented, especially when information is quickly generated and useful only for an instant. In high-frequency trading, which is conducted wholly in software, trades execute at intervals of well under one second based on preprogrammed strategies.
The financial sector was selected as a case study for this research because there's a powerful financial motive for improvement, but Wissner-Gross says this concept also can be more broadly applied to helping make the Internet faster. Freer adds that firms can use this tool in the near term to give them a better look at their current network operations, especially given the cost associated with constructing a new datacenter. Examining these possible intermediate sites, he says, might result in the discovery of location as a new natural resource, especially for areas ideally suited for low-latency datacenter construction.
Firms with datacenters can use the formula to determine the securities or derivatives they're best positioned to trade based on their current locations as well as to make subtle changes that can improve what Wissner-Gross calls the correlations between networks.
Although NYSE Euronext hasn't spent much time examining latency issues, Rubinow believes traders are eager to solve them. One company recently completed construction of a low-latency line between the New York and Chicago exchanges that shaved three milliseconds from typical network transit times. He says there are traders willing to pay for such a perceived advantage.
Wissner-Gross says the MIT researchers have been working with several different firms interested in using their formula, including some building a low-latency network infrastructure for the financial industry.
Newly announced technologies developed at the University of Michigan represent a significant push for millimeter-scale computing systems and ubiquitous computing.
Researchers devised an implantable glaucoma sensor as well as an integrated antenna and radio. The sensor is a complete system in a cubic millimeter package, while tiny computer systems can use the integrated antenna for communication. David Blaauw, a professor of electrical engineering and computer science who is working on the sensor electronics, says that each ongoing research project constitutes an important milestone for computing at this scale.
Millimeter-scale systems, used to enable ubiquitous or pervasive computing, aren't formally defined, but they should be complete computing systems in which all the components are low-power and fit on one chip, including the radio and power source. David Wentzloff, a professor at the university whose group is working on the integrated antenna and radio, says that beyond size, it's also important for such systems to perpetually operate, requiring them to harvest light to operate or recharge.
Blaauw says the intraocular pressure sensor is the first complete millimeter computing system. This implantable eye pressure monitor is designed to track glaucoma by taking a patient's corneal pressure readings every 15 minutes. Irregular pressure indicates glaucoma, which deteriorates nerves at the back of the eye. The system consists of an ultra-low-power microprocessor, a pressure sensor, memory, a thin-film battery, a solar cell, and a wireless radio all contained within a cubic millimeter package. Also in the device is a timer that controls when the processor takes measurements; circuitry to manage device power, including converting the solar-gathered energy to a charge the system can use; and an analog-to-digital converter for the data.
The computer's memory retains data for a week. To retrieve stored information, the doctor or patient holds an external device near the eye that wakes the sensor and reads the data. Blaauw says the implant could eventually be designed to automatically communicate directly with a physician's office using wireless technology.
A third-generation Phoenix chip, a processor designed by Blaauw's group that features a unique power-gating architecture and an extreme sleep mode to achieve ultra-low-power consumption, is used in the system. Blaauw says only the memory and timer run in sleep mode. The system also shuts off the solar cell's charging capability to prevent battery drain.
The sensor's average power consumption is 5.3 nanowatts. Keeping the battery charged requires exposure to either 10 hours of indoor light or 1.5 hours of sunlight. Because the sensor is implanted in the eye, no energy is harvested when the patient is napping.
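The figures quoted here imply a simple energy budget: at 5.3 nanowatts average draw, a day of operation consumes roughly 458 microjoules, and the stated recharge times imply how much power the solar cell must harvest while lit. This is a back-of-the-envelope reading of the article's numbers, not a published specification of the device.

```python
# Back-of-the-envelope energy budget from the figures quoted above.
AVG_POWER_W = 5.3e-9          # 5.3 nW average consumption
SECONDS_PER_DAY = 24 * 3600

daily_energy_j = AVG_POWER_W * SECONDS_PER_DAY   # energy used per day

# If 10 hours of indoor light (or 1.5 hours of sunlight) replaces a
# day's consumption, the implied harvesting power while lit is:
harvest_indoor_w = daily_energy_j / (10 * 3600)
harvest_sun_w = daily_energy_j / (1.5 * 3600)

print(f"daily energy:            {daily_energy_j * 1e6:.0f} uJ")
print(f"implied indoor harvest:  {harvest_indoor_w * 1e9:.1f} nW")
print(f"implied sunlight harvest: {harvest_sun_w * 1e9:.1f} nW")
```

The arithmetic shows why such a tiny solar cell suffices: even sunlight harvesting need only average on the order of tens of nanowatts.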
Currently, the sensor uses an asymmetric radio coil on the chip that talks to a larger external device. To be fully contained, the system needs a symmetric radio to communicate with external devices or other sensors.
The newly published research by Wentzloff's group is a proof of concept of a radio with an integrated antenna that can be used in a sensor node. Blaauw says that future iterations of his group's work on glaucoma sensors will use that technology.
Wentzloff says that although other research demonstrates that antennas can be made using a CMOS process, this work proves that the entire antenna, including the radio's electronics, can be made using CMOS technology. The system has room beneath the antenna for the electronics, which saves on-chip area. Integration also lowers manufacturing costs. Typically, radios need a crystal to generate the radio frequency, but these are large and power hungry. The antenna, made in metal using CMOS technology, is also self-tuning. Added circuitry monitors the signal. The radio uses a feedback loop to self-tune to the antenna's specific frequency.
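The self-tuning behavior can be pictured as a simple control loop: nudge the oscillator frequency, monitor the antenna's measured response, and keep whichever adjustment improves it. The toy hill-climbing sketch below is an assumption-laden illustration of that feedback idea, not the Michigan group's actual circuit; the resonance, bandwidth, and step values are invented for the example.

```python
def antenna_response(freq_hz, resonance_hz=915e6, bandwidth_hz=10e6):
    """Toy model: response peaks at the antenna's resonant frequency
    (915 MHz and 10 MHz bandwidth are illustrative values)."""
    return 1.0 / (1.0 + ((freq_hz - resonance_hz) / bandwidth_hz) ** 2)

def self_tune(start_hz, step_hz=1e6, iterations=200):
    """Hill-climb the oscillator frequency toward the antenna's
    resonance, mimicking a feedback loop that monitors signal strength."""
    freq = start_hz
    best = antenna_response(freq)
    for _ in range(iterations):
        for candidate in (freq + step_hz, freq - step_hz):
            r = antenna_response(candidate)
            if r > best:            # keep only adjustments that improve
                freq, best = candidate, r
    return freq

tuned = self_tune(start_hz=880e6)   # start 35 MHz off resonance
```

Starting 35 MHz off, the loop walks the frequency step by step until the measured response stops improving, settling at the antenna's resonance.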
The communication distance of these radios is so short that they might not talk directly to a hub, but they can communicate with an adjoining sensor node that can ultimately relay information hop-by-hop to the network's edge or beyond. The radios could be used in networks for either relaying measurements or coordinating network responses based on data gathered, Wentzloff says. These radios might also communicate with a device with more resources such as a cell phone.
Both research groups are continuing to refine these technologies, which face various federal approvals along the path to commercialization. Blaauw says the glaucoma implant is about to enter animal testing. Other researchers are interested in using the sensor for additional types of medical applications, including monitoring intracranial pressure and changes in tumor size during chemotherapy, as well as microbial fuel cells. But first, they're working on unresolved issues including extending the implant's battery life to at least six weeks.
Once ubiquitous computing becomes mainstream, Blaauw says, the devices needed in the new market would be sufficiently numerous to fuel the semiconductor industry's growth, as each user would need tens or even thousands of millimeter-scale systems.
News Briefs written by Linda Dailey Paulson, a freelance technology writer based in Portland, Oregon. Contact her at firstname.lastname@example.org.
University of Cincinnati engineers have shown that paper can be used as a display substrate and are moving toward the creation of a wholly disposable display.
The basic display mechanism is based on the electrowetting effect and is referred to as EW display technology. The EW process applies a voltage to manipulate how mixed fluids wet a surface.
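Electrowetting's underlying physics is commonly described by the Young-Lippmann relation: a voltage applied across the dielectric lowers the liquid's contact angle, changing how it spreads and thus what the pixel shows. The numerical sketch below uses that textbook relation with illustrative material parameters, not the Cincinnati device's actual values.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def contact_angle_deg(v, theta0_deg=120.0, eps_r=2.0, d=1e-6, gamma=0.05):
    """Young-Lippmann: cos(theta) = cos(theta0) + eps0*eps_r*V^2 / (2*gamma*d).

    theta0_deg: zero-voltage contact angle on the hydrophobic layer,
    eps_r, d:   dielectric constant and thickness (m),
    gamma:      liquid surface tension (N/m) -- all illustrative values.
    """
    c = math.cos(math.radians(theta0_deg)) + EPS0 * eps_r * v**2 / (2 * gamma * d)
    c = min(1.0, c)   # saturation: the angle cannot drop below zero
    return math.degrees(math.acos(c))

# Raising the voltage drives the droplet from beading up toward spreading out.
for volts in (0, 30, 60):
    print(f"{volts:3d} V -> contact angle {contact_angle_deg(volts):.1f} deg")
```

With these assumed parameters, 60 V pulls the contact angle from 120 degrees down to roughly 82 degrees; switching that wetting state on and off is what modulates each pixel.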
As Figure A shows, in the researchers' paper-based display, a metal layer is placed atop a small paper tube. The next layer is a dielectric material, which is then topped with a fluoropolymer layer. Water and an oil-based dye are placed on the fluoropolymer layer. The competitive EW effect changes the color in each pixel, explains lead researcher Andrew J. Steckl. The researchers measured switching times on paper about as fast as those of displays with conventional glass substrates, which Steckl says indicates that video-rate display operation is possible.
Figure A Cross-section diagram shows how an actual electrowetting display array on paper, as developed by the University of Cincinnati, would look. The layers include oil/dye regions for individual pixels, a water layer, and a transparent cap layer atop a paper substrate.
They have experimented with papers of various weights, approximately 50 to 500 μm thick, made using different processes. These papers also have various surface finishes, which Steckl says change how the water behaves on the surface.
Paper is low cost, flexible, disposable, and sustainable. In addition, there's a great deal of knowledge about paper and it's a renewable material, whether made from tree pulp or cotton fiber. Steckl says these displays could be made using roll-to-roll processing, an established paper manufacturing method.
Paper substrates would reduce device complexity and cost. Moreover, because a display's substrate represents the bulk of the material that must eventually be disposed of, using paper would help reach the long-term goal of creating devices made exclusively, or at least primarily, from biorenewable materials that are fully biodegradable.
Published work from Steckl's group, which is currently working on prototypes, states that they intend to create working EW displays on paper next, then determine their optimal operating conditions (http://spie.org/x44337.xml?highlight=x2408&ArticleID=x44337).
Chris Chinnock, president of Insight Media, says EW is a "kissing cousin" to the commonly used e-ink technology, which uses electrophoretics. This competing display technology uses tiny capsules containing black and white particles that shift based on the application of electricity. Popular e-readers, including those from Amazon and Sony, use electrophoretic displays.
It's too early to assess whether a paper display is realistic, particularly one based on EW technology, which hasn't had the same market success as electrophoretic displays, says Norbert Hildebrand, senior analyst at Insight Media.
One form of EW that uses colored oils over transparent electrodes atop a white substrate was originally developed at Philips Research in the Netherlands (Computer, Dec. 2003, pp. 24-26). Philips subsequently created Liquavista in 2006 to commercialize the technology; Samsung acquired the firm in late January 2011. Hildebrand says Samsung has a three- to five-year head start on commercializing EW with its recent purchase of Liquavista and its access to device manufacturing, adding that the University of Cincinnati group has at least three more years' work ahead. Even so, a completely biodegradable device would require technologies that aren't yet available, making it more likely that five to six years would pass before such a display could be constructed.
Both Hildebrand and Chinnock say that once the researchers have good lab prototypes, they need to find manufacturing partners with the expertise needed to invest in and deliver real commercial products, which Chinnock says is possible.