Issue No. 2, February 2011 (vol. 44)
Published by the IEEE Computer Society
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MC.2011.50
News briefs on current topics.
IBM Project Proposes Using Light to Make Chips Faster
IBM has developed an optical technology that could speed up data transmissions among servers, between chips within a system, or on the chips themselves.
The company's SNIPER (silicon nanoscale integrated photonic and electronic transceiver) project uses CMOS integrated nanophotonic technology. This approach employs pulses of light, rather than copper connections, to move data at higher speeds, explained IBM silicon nanophotonics manager Yurii Vlasov.
He likened this to the replacement of copper telephone wiring in the 1970s with fiber, which provides faster, higher-capacity, and more efficient connections.
Some interconnects already use optical technology. Current approaches employ parallel optical interconnects using a single wavelength to carry data, noted William Green, research staff member with IBM's Silicon Nanophotonics Group.
IBM's approach uses wavelength division multiplexing, which is faster because numerous data streams can be carried on multiple wavelengths simultaneously within a single fiber. IBM integrates the photonics and the CMOS electronics circuitry on the same chip during the manufacturing process, rather than afterward. This approach is less expensive, yields more chips, and avoids damaging parts during assembly.
The chip includes a transmitter, receiver, optical modulator, photodetector, amplifiers, and other elements. Their integration makes the manufacturing process easier and less costly, and enables smaller interconnects.
The circuitry surrounding the receiver uses a germanium photodetector atop the waveguide to detect the photons in the optical signals and create an electrical current. This is necessary because computers can process the signals only in electrical form. The system subsequently passes the signal through a modulator to convert the processed data into optical form, which again enables a higher signaling rate.
Optics also make interconnects more power-efficient because they avoid the signal resistance and distortion—and the required energy-consuming corrective measures—inherent in using copper wire.
According to Vlasov, the CMOS integrated nanophotonic circuitry could be manufactured on a standard chip production line without special tools, making it cost-effective to produce in large quantities.
IBM is developing the SNIPER chip in 130- and 65-nm feature sizes, noted Jag Bolaria, senior analyst with the Linley Group, a market research firm.
Supercomputing is the IBM technology's target application. The approach could help boost supercomputer performance 1,000-fold, to one exaflops (one quintillion floating-point operations per second). SNIPER's high communication bandwidth could also be ideal for mainframes and servers, many of which are moving toward high-performance computing, noted Vlasov.
IBM hopes to build an exaflops computer by 2020. Today's fastest supercomputers perform about 1 petaflops (one quadrillion flops). IBM's interconnect technology eventually could also be used in applications such as campus-wide switching and consumer electronics.
SNIPER is a good, long-term project for implementing photonics on CMOS silicon, said Bolaria.
IBM's approach won't be ready for commercialization for several years, said Vlasov, with products using the interconnect technology available by about 2017.
Bolaria questioned whether IBM can continue investing in the technology without seeing a return on investment for so long. Also, he added, CMOS photonics have a limited transmission range and thus would be practical for applications such as supercomputing but not for the lucrative telecommunications market.
Luxtera is developing a 130-nm silicon-photonics approach, he noted, but it's designed primarily for active optical cables.
Testing Tool Finds Software Bugs Efficiently
The US National Institute of Standards and Technology has released a testing tool designed to cut costs by finding software flaws more efficiently than similar approaches.
NIST computer scientist Rick Kuhn said the Advanced Combinatorial Testing System (ACTS) tool is important because software-implementation errors contribute significantly to information-system security vulnerabilities. NIST has reported that inadequate software testing costs the US economy billions of dollars per year despite the allocation of considerable resources to testing.
Kuhn noted that testing all possible combinations of system inputs and execution paths is impossible. High-assurance software requires extensive, and expensive, testing. Less rigorous testing can lead to software errors that could cause system failures and security vulnerabilities.
Combinatorial testing like that used in ACTS can reduce the cost and increase the effectiveness of testing for many applications, according to Kuhn.
Not every input variable contributes to every software or hardware failure, he noted. In fact, he said, NIST research suggests the majority of software failures are triggered by only two variables interacting, and almost all failures are caused by interactions among no more than six parameters. Thus, he explained, testing combinations of these relatively few parameters can provide highly effective fault detection.
Until recently, combinatorial testing had been limited to combinations of two variables at a time because efficient algorithms for four- or six-way testing were lacking. However, two-way testing can miss 10 to 40 percent of a system's bugs, which is inadequate for mission-critical software because it fails to catch complex faults, Kuhn said.
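The idea behind two-way (pairwise) testing can be illustrated with a short sketch. ACTS itself uses the In-Parameter-Order-General algorithm; the greedy method below is a simpler, AETG-style stand-in chosen only to show why a pairwise suite is so much smaller than exhaustive testing. The parameter names and values are hypothetical examples, not from ACTS.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy 2-way covering array: repeatedly pick the full test case
    that covers the most not-yet-covered parameter-value pairs."""
    names = list(params)
    # Every 2-way interaction that must appear in at least one test.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    candidates = [dict(zip(names, vals)) for vals in product(*params.values())]
    suite = []
    while uncovered:
        # Pick the candidate test covering the most remaining pairs.
        best = max(
            candidates,
            key=lambda t: sum(
                ((a, t[a]), (b, t[b])) in uncovered
                for a, b in combinations(names, 2)
            ),
        )
        suite.append(best)
        for a, b in combinations(names, 2):
            uncovered.discard(((a, best[a]), (b, best[b])))
    return suite

# Hypothetical system under test: 3 parameters, 3 values each.
params = {
    "os": ["linux", "windows", "macos"],
    "browser": ["firefox", "chrome", "safari"],
    "network": ["wifi", "lan", "vpn"],
}
suite = pairwise_suite(params)
print(f"{len(suite)} tests cover every pair; exhaustive testing needs 27")
```

Exhaustive testing of this toy system needs 3 × 3 × 3 = 27 cases, while a pairwise suite covers every two-variable interaction in far fewer, which is the cost reduction Kuhn describes.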
ACTS, which NIST distributes for free (contact email@example.com), can efficiently test combinations of two to six interacting variables. It uses the In-Parameter-Order-General algorithm that University of Texas at Arlington associate professor Jeff Lei developed. Kuhn said ACTS can compute results in a few seconds and thus is considerably faster than previous combinatorial-testing algorithms.
Users can employ the tool with various types of testing methods. Combinatorial testing would be particularly useful for software utilized in electronic commerce, databases, communications, or other applications with many input values, Kuhn said.
Combinatorial testing offers several benefits, such as a 30 to 40 percent reduction in the time needed to select and document test cases and a doubling of the number of defects found per tester hour, said Justin Hunter, CEO of software-testing provider Hexawise.
Kuhn said about 550 organizations have downloaded ACTS. Lockheed Martin is trying it on some software projects as part of a long-term effort to improve its testing program, which includes evaluating combinatorial approaches, explained Jon D. Hagar, a senior system engineer with the company.
News Briefs written by Linda Dailey Paulson, a freelance technology writer based in Portland, Oregon. Contact her at firstname.lastname@example.org.
MIT Develops Network-Intrusion Recovery System
MIT researchers have developed a system designed to make it easier for systems to recover from security breaches.
The RETRO system, developed at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), lets administrators find and undo offending actions that a hacker attack has caused, explained assistant professor Nickolai Zeldovich.
RETRO helps address attacks that hackers implement stealthily and whose damage may not be readily apparent to a network administrator. The system doesn't detect attacks itself: administrators must first identify an attack and determine when it started, for example by using intrusion-detection tools.
RETRO takes periodic snapshots of the system state and records a log of all system activities—called an action history graph—that captures the effects and dependencies of each action. This lets an administrator understand what problems an intrusion causes.
The MIT application undoes the action that caused the intrusion, as specified by the administrator, and all of its side effects, as determined via the action history graph.
RETRO—not an acronym but an indication that the technology retroactively undoes harmful operations—works by constructing new system states. In essence, it selectively rolls back affected files to states that existed before an attack caused problems and re-executes computations whose inputs were affected.
Unlike system-restore and similar applications, RETRO selectively preserves legitimate actions that occur after an attack. For example, Zeldovich noted, if a hacker created a user ID on the system, RETRO would revert the account information and passwords to a point prior to the attack but preserve legitimate accounts created afterward.
Via the action history graph, the administrator identifies the first attacker-caused action. RETRO then finds every action the attack caused or influenced, undoes the attacker's operations, and safely re-executes the affected legitimate actions in the same way they originally occurred.
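The snapshot-plus-replay idea can be sketched in a few lines. This is a deliberately simplified toy model, not the MIT code: actions, file names, and the parent/child dependency field are all hypothetical, and replaying recorded effects stands in for RETRO's true re-execution of programs.

```python
def recover(snapshot, log, attack_id):
    """Toy RETRO-style recovery: roll back to the snapshot, then replay
    every logged action except those derived from the attack."""
    # Propagate dependence forward through the action history: an action
    # is attack-derived if it is the attack itself or was spawned
    # (parent link) by an attack-derived action.
    derived = {attack_id}
    for act in log:
        if act["parent"] in derived:
            derived.add(act["id"])
    # Restore the pre-attack snapshot, then re-apply legitimate actions
    # in their original order, skipping everything attack-derived.
    state = dict(snapshot)
    for act in log:
        if act["id"] not in derived:
            state[act["key"]] = act["value"]
    return state, derived

snapshot = {"/etc/passwd": "root"}
log = [
    {"id": 1, "parent": 0, "key": "/etc/passwd", "value": "root+mallory"},  # attack
    {"id": 2, "parent": 1, "key": "/usr/bin/sshd", "value": "backdoored"},  # spawned by attack
    {"id": 3, "parent": 0, "key": "/etc/passwd", "value": "root+alice"},    # legitimate, later
]
state, derived = recover(snapshot, log, attack_id=1)
# Mallory's account and the backdoor are undone; Alice's account survives.
```

Here the attacker's account creation and the backdoor it installed (actions 1 and 2) are rolled back, while the legitimate account added afterward (action 3) is preserved by replay, mirroring the user-ID example Zeldovich gives above.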
Researchers have tested RETRO via a honeypot, a decoy system put on a network as bait for attackers. According to the researchers, a RETRO prototype for Linux helped systems recover from a mix of 10 real-world and synthetic attacks, with only two requiring significant administrator participation.
Zeldovich noted that RETRO has had problems restoring information in some types of files.
Also, the technology works based on the assumption that an attacker hasn't compromised the operating system kernel. If this occurs, a hacker could tamper with the action history graph and thereby hide problems from RETRO. Zeldovich said his team is working on extensions that will identify and account for kernel-related issues.
Researchers want to turn RETRO—which currently runs on individual machines—into a research prototype capable of operating on a large scale, such as on computer clusters or large websites.
Zeldovich said it will be several years before RETRO is ready for commercialization.