Issue No.04 - April (2006 vol.39)
Published by the IEEE Computer Society
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MC.2006.136
Researchers Develop New Chip-Making Technique
Scientists have developed a proof-of-concept approach for extending current chip-making techniques so that manufacturers can produce semiconductors with smaller feature sizes without spending millions of dollars to radically retool their fabrication plants to accommodate different techniques.
IBM and JSR Micro, which supplies custom materials for the semiconductor- and electronic-device-fabrication industries, developed the new technique. It uses advanced lenses and new materials to create chips with feature sizes of 29.9 nanometers and, eventually, even smaller. Current microprocessors generally have 90-nm feature sizes.
Smaller feature sizes would let manufacturers pack more transistors onto chips, thereby increasing their power without making them larger.
The new approach would extend current techniques, which use argon-fluoride lasers with deep-ultraviolet and high-index immersion lithography, to produce circuitry patterns on the photoresist that sits on the silicon.
Immersion lithography typically uses water, which has a refractive index of 1.43. The refractive index measures how much a light wave slows when passing through a liquid or lens. Light passing through a high-index material has a shorter effective wavelength, which lithography can focus more tightly, yielding finer feature sizes. Thus, the argon-fluoride laser, which has a 193-nm wavelength, can generate small feature patterns by passing its light through a lens and a liquid before it reaches the photoresist.
Using different liquids and lens materials could increase the overall refractive index, which would enable smaller feature sizes using today's lithography techniques, explained Mark Slezak, technical manager of JSR Micro's lithography group.
For the new approach, JSR supplied an organic liquid, which it declined to identify, that has a 1.64 refractive index. In addition, IBM used higher-density quartz lenses with a refractive index of 1.67, up from current lenses' 1.56.
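The role the refractive index plays can be illustrated with the Rayleigh resolution criterion, CD = k1 x wavelength / NA, where the numerical aperture NA = n x sin(theta) can exceed 1.0 only when an immersion fluid replaces air. The sketch below shows the trend; the k1 factor and aperture half-angle are illustrative assumptions, not IBM's actual process parameters, so the printed values will not match the 29.9-nm result exactly.

```python
# Rayleigh criterion: minimum printable feature CD = k1 * wavelength / NA,
# where NA = n * sin(theta). Immersion raises n above air's 1.0.
# K1 and SIN_THETA are assumed values for illustration only.
WAVELENGTH_NM = 193.0   # argon-fluoride laser
K1 = 0.3                # process-dependent factor (assumed)
SIN_THETA = 0.93        # sine of the aperture half-angle (assumed)

def min_feature_nm(n_fluid: float) -> float:
    """Smallest printable feature size for a given immersion-medium index."""
    na = n_fluid * SIN_THETA
    return K1 * WAVELENGTH_NM / na

for name, n in [("dry (air)", 1.00), ("water", 1.43), ("organic fluid", 1.64)]:
    print(f"{name}: {min_feature_nm(n):.1f} nm")
```

The trend is the point: swapping water for the higher-index organic fluid shrinks the minimum feature size by roughly the ratio of the two indices, with no change to the 193-nm light source.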
"This shows that several more generations of immersion lithography are possible," said Bob Allen, manager of lithography materials for IBM's Almaden Research Center.
Chip experts had predicted that present lithography approaches working with current materials—which create microprocessors with 110-, 90-, and 65-nm features—would be unable to draw circuit patterns smaller than 40 nm, requiring a shift to far different techniques.
Aaron J. Hand, managing editor of Semiconductor International magazine, said the new manufacturing technique would still require chipmakers to make some changes to their fabrication plants and buy new tools. The technique's success will thus depend on how costly it turns out to be and whether IBM and JSR can make it work as more than just a proof-of-concept approach, Hand noted.
In perhaps six or seven years, he continued, manufacturers will still have to move to next-generation techniques, such as extreme UV lithography, that would require extensive and expensive retrofitting of fabs.
Pay-to-Play E-Mail Plan Sparks Controversy
Protests have arisen over a plan by AOL and Yahoo to test a service that lets e-mail senders pay for guaranteed delivery of messages, without subjecting them to spam filtering.
AOL and Yahoo say the plan is a good way to offer a higher level of desirable services to parties interested in paying for them, as many businesses do.
Numerous opponents, such as the Electronic Frontier Foundation and the Spamhaus Project, characterized the plan as a move away from the Internet's longtime free, neutral, and open culture by creating one that is financially stratified.
AOL and Yahoo will use Goodmail Systems' technology to implement the plan for messages that are going to recipients within the two ISPs. Goodmail won't let companies pay for the additional service unless they have the consent of the consumers to whom they want to send mail, said Charles Stiles, AOL's postmaster and senior technology manager for mail operations.
For each message the new system handles, the sender submits to Goodmail a hash code, as well as information such as recipients' e-mail addresses and the sender's Goodmail identification number, explained Daniel T. Dreymann, the company's cofounder and senior vice president of product, engineering, and operations.
Goodmail then sells the senders cryptographic tokens and identifying codes to embed in messages. The tokens are linked to specific senders' e-mail addresses to prevent spoofing.
AOL and Yahoo will install software to verify whether a message has a token and has thus been certified by Goodmail. If so, it will be sent without passing through spam filters.
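Goodmail has not published its token format, but the flow the article describes—a token issued by a certifier, bound to a specific sender's address so it cannot be spoofed, and checked by the receiving ISP—can be sketched with a keyed hash. Everything below (the key, function names, and token layout) is a hypothetical illustration, not Goodmail's actual scheme.

```python
import hmac
import hashlib

# Hypothetical secret held by the certifying service (not Goodmail's real design).
ISSUER_KEY = b"certifier-demo-key"

def issue_token(sender_id: str, sender_addr: str, recipient: str) -> str:
    # The token binds the sender's ID and address to the recipient, so a
    # token copied from one message cannot be replayed by a spoofed sender.
    msg = f"{sender_id}|{sender_addr}|{recipient}".encode()
    return hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()

def verify_token(token: str, sender_id: str, sender_addr: str, recipient: str) -> bool:
    # The receiving ISP recomputes the expected token and compares in
    # constant time; a match means the certifier vouched for this message.
    expected = issue_token(sender_id, sender_addr, recipient)
    return hmac.compare_digest(token, expected)
```

A message carrying a valid token would bypass the spam filters; any tampering with the sender fields makes verification fail.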
Senders—except for nonprofit organizations, which could use the service for free—would pay Goodmail between 0.25 and 1 cent per token, depending on the amount of mail they send via the program each month. AOL and Yahoo would receive some of the revenue.
The recipient will see a logo, both in the incoming-mail list and in the window in which the mail is read, denoting that an incoming message has passed through Goodmail's system. The logo will not appear in the body of a message, where it could be forged, Stiles said.
AOL plans to implement the system this spring; Yahoo will do so later this year.
Dreymann said recipients can notify Goodmail if they receive an e-mail via the company's system that they don't want. Goodmail will use this data to calculate a score reflecting whether a sender is distributing too much unwanted e-mail. "If a sender's score goes beyond a threshold, the sender is warned, then put on probation, and ultimately kicked out," explained Dreymann.
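Dreymann's warn-probation-expel progression is essentially a complaint-rate score with escalating thresholds. A minimal sketch of that idea follows; the threshold values and class design are invented for illustration, since the article does not disclose Goodmail's actual scoring rules.

```python
class SenderReputation:
    """Tracks a sender's complaint rate and maps it to an escalating status."""

    # Hypothetical complaint-rate thresholds (complaints / messages sent).
    WARN = 0.02
    PROBATION = 0.05
    EXPEL = 0.10

    def __init__(self) -> None:
        self.sent = 0
        self.complaints = 0

    def record(self, delivered: int, complained: int) -> None:
        # Recipients reporting unwanted certified mail feed this counter.
        self.sent += delivered
        self.complaints += complained

    def status(self) -> str:
        if self.sent == 0:
            return "ok"
        rate = self.complaints / self.sent
        if rate >= self.EXPEL:
            return "expelled"
        if rate >= self.PROBATION:
            return "probation"
        if rate >= self.WARN:
            return "warned"
        return "ok"
```

A sender with a handful of complaints per ten thousand messages stays in good standing; a rising complaint rate walks the sender through the warned and probation states before expulsion.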
Goodmail's system would complement spam filters by making senders more discriminating about choosing the recipients to whom they pay to send e-mail.
The plan means only that "AOL and Yahoo users will get higher-quality corporate spam from people who wish to pay for it," said Paul Heller, president and founder of Heller Information Services, an ISP for US governmental agencies. Heller called the practice "repulsive."
However, said John Levine, chair of the Anti-Spam Research Group of the Internet Research Task Force, some type of certification for transactional e-mail makes sense. A few firms are already doing this, including Habeas, on whose board Levine sits.
Nonetheless, said Levine and Heller, the plan by AOL and Yahoo will do nothing about spam that already gets through filters.
Hackers Strengthen Malicious Botnets by Shrinking Them
To make their attacks more effective by making them harder to detect, hackers are scaling back the size of the networks of infected computers they use to launch malware, denial-of-service (DoS), spam, or phishing campaigns.
ISPs have more trouble finding, and thus countering, smaller botnets of these zombie computers than they had with the huge networks that attackers previously used. This lets the botnets operate and cause problems longer.
The size of bot networks peaked in mid-2004, with many using more than 100,000 infected machines, according to Mark Sunner, chief technology officer of MessageLabs, a provider of messaging security and management services. The average botnet size is now about 20,000 computers, he said.
Nonetheless, noted Aaron Hackworth, Internet security analyst with the CERT Coordination Center, an Internet security organization, his organization continues to see some large botnets.
Hackers prefer using botnets for adware, spyware, and DoS attacks because the networks comprise many machines and can thus launch large-scale campaigns.
In typical campaigns, hackers infect computers—usually home PCs, which frequently use always-on broadband Internet connections and are generally less secure than enterprise machines—with malware. The malware often connects to Internet relay chat (IRC) systems, turning computers into zombies waiting for orders to execute without their owners' knowledge.
Botnet administrators are careful about when they access, update, or otherwise use their zombies because the activities attract ISPs' attention, noted Dean Turner, Symantec Security Response's senior manager.
ISPs monitor traffic flow on their networks looking for large-scale anomalous patterns that indicate signs of problems, such as more machines than usual connecting to IRC servers or regularly visiting the same Web site, noted Hackworth.
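One pattern Hackworth describes—more machines than usual connecting to the same server—can be sketched as a simple baseline comparison over connection logs. This is a hypothetical illustration of the kind of check an ISP might run; the function name, data shapes, and the 5x ratio are invented, and it also shows why a small botnet slips through: fewer distinct hosts means the count may never exceed the baseline.

```python
def flag_suspect_destinations(connections, baseline, ratio=5.0):
    """Flag destinations contacted by far more distinct hosts than usual.

    connections: list of (source_host, destination) pairs from flow logs.
    baseline: dict of typical distinct-host counts per destination.
    ratio: how far above baseline counts may rise before being flagged
           (an assumed threshold, for illustration).
    """
    hosts_per_dst = {}
    for src, dst in connections:
        hosts_per_dst.setdefault(dst, set()).add(src)
    return [dst for dst, hosts in hosts_per_dst.items()
            if len(hosts) > ratio * baseline.get(dst, 1)]
```

Sixty zombies reporting to one IRC server trips a baseline of ten; a botnet trimmed to a few dozen hosts spread across several servers may stay under every threshold, which is exactly the advantage the smaller networks give attackers.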
Upon detecting problems, ISPs can stop suspicious transmissions and notify security vendors. That starts a race against time to get a sample of the malware and develop a way to stop the infection that enables the botnet attacks, explained Sunner.
However, ISPs might not quickly notice a relatively small number of computers acting in concert. This delay can give botnets considerable time to cause damage.
News Briefs written by Linda Dailey Paulson, a freelance technology writer based in Ventura, California. Contact her at email@example.com.