Issue No. 04, Apr. 2014 (vol. 47)
Published by the IEEE Computer Society
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MC.2014.95
Topics include the biggest distributed denial-of-service ever measured, the IPv4 address shortage, the cooling of supercomputers by immersing them in liquid, a report naming Java as the leading malware target, a new headset that beams video directly into users’ eyes, an ear-based computer controlled by facial movements, software that makes cloud operations more efficient, new technology that lets Wi-Fi users borrow others’ bandwidth when needed, and a sensor that helps people grow plants better.
Experts Detect Biggest Distributed Denial-of-Service Attack Ever
A French security company was the target of the biggest distributed denial-of-service (DDoS) attack ever measured, with incoming traffic reaching just over 400 gigabits per second.
The cyberassault on CloudFlare ultimately slowed or denied Web access for hundreds of millions of people, particularly in Europe.
Initially, the data flood hit the Spamhaus Project—an organization that tracks spam-related activity—at rates of up to 90 gigabits per second. A Dutch communications service provider has reportedly taken credit for the incident, saying Spamhaus was abusing its influence as an antispam advocate.
Spamhaus hired CloudFlare to end the attack, but after setting up defenses, the security vendor itself became the target of a 100 gigabit-per-second assault.
When that didn't shut down CloudFlare's operations, the attackers went after the company's network providers, until the data flow reached its peak.
In the process, several providers’ networks became congested, affecting Web access for hundreds of millions of users.
The assailants used a Network Time Protocol (NTP) reflection attack, an increasingly popular approach among hackers.
The protocol—one of the Internet's oldest—is used to cope with network latency by synchronizing the time settings on computers across the Internet. The widely used NTP was first implemented about 35 years ago.
Reflection attacks make fake synchronization requests to NTP servers, which then transmit a flood of replies to targeted sites.
Security vendor Arbor Networks, in its ninth annual Worldwide Infrastructure Security Report, said that large DDoS attacks are becoming both bigger and more frequent, and predicted the trend will continue.
Using NTP to launch DDoS attacks is a new approach, first reported by security vendor Symantec in December 2013.
Since then, these attacks have become a favorite of hackers, particularly as users have better secured their systems against previous popular types of assaults.
Reflection attacks took down several major gaming sites in January 2014. And several other large NTP DDoS attacks—with data rates of between 20 and 80 gigabits per second—were launched at the same time that hackers hit CloudFlare.
In these attacks, hackers work via a network that doesn't offer protection against IP address spoofing. They then generate large volumes of packets that spoof their intended target's address and send them to older NTP servers that still work with the monlist command.
The monlist command requires servers to return a list of up to 600 IP addresses that most recently accessed them to the senders of NTP requests. This makes the response up to 200 times larger than the original monlist request, magnifying the NTP DDoS attack's effect.
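As a back-of-the-envelope illustration of that amplification, the sketch below computes the ratio of response size to request size. The request size and per-entry size here are illustrative assumptions, not measured protocol values; only the 600-entry limit comes from monlist itself.

```python
# Rough amplification estimate for an NTP monlist reflection attack.
# REQUEST_BYTES and BYTES_PER_ENTRY are assumed figures for illustration.

REQUEST_BYTES = 64        # assumed size of the small spoofed UDP request
MONLIST_ENTRIES = 600     # monlist returns up to 600 recent client addresses
BYTES_PER_ENTRY = 21      # assumed payload bytes per returned address record

response_bytes = MONLIST_ENTRIES * BYTES_PER_ENTRY
amplification = response_bytes / REQUEST_BYTES

print(f"response: {response_bytes} bytes, amplification: ~{amplification:.0f}x")
```

With these assumed sizes the factor lands near the "up to 200 times" figure cited above; real-world factors vary with packet overhead and server configuration.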
Security experts recommend that users avoid working with NTP servers that support monlist and networks that allow IP address spoofing.
What Happened to the IPv4 Address Shortage?
For years, engineers have warned that the number of available IPv4 Internet addresses is rapidly decreasing and that businesses and providers should adopt IPv6, which offers many more addresses for the rapidly growing number of connected individuals, organizations, and devices.
Indeed, the Internet Assigned Numbers Authority allocated the last blocks of IPv4 addresses to the five regional registries—which distribute Internet identifiers to ISPs and individual organizations—in early 2011.
However, IPv6 adoption has been slow. For example, Google reports that as of 10 March 2014, a maximum of only 3.16 percent of visitors to its site accessed the Internet via IPv6.
The RIPE (Réseaux IP Européens, or European IP Networks) Network Coordination Center, a registry that serves Europe and the Middle East, regularly samples Internet communications worldwide. RIPE found that on 28 February 2014—the last date for which it has posted data—only 17.45 percent of networks worldwide announced an IPv6 prefix.
Despite dire predictions, users can still acquire IPv4 addresses.
IPv4, released in 1981, uses 32-bit addresses and thus provides 2³² (about 4.3 billion) Internet addresses.
In 1995, the Internet Engineering Task Force (IETF) released IPv6, which uses 128-bit addresses and thus provides 2¹²⁸ (about 3.4 × 10³⁸) addresses.
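The difference in scale follows directly from the address widths:

```python
# Address-space sizes implied by 32-bit and 128-bit addressing.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,} addresses")    # 4,294,967,296
print(f"IPv6: {ipv6_addresses:.2e} addresses")  # 3.40e+38
```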
Because billions of people and devices now connect to the Internet, with the number growing every day, experts have warned that there soon won't be enough IPv4 addresses to handle the load. They said the Internet will have to switch to IPv6 as soon as possible.
However, Internet organizations and companies have figured out ways to make the most of the available IPv4 addresses.
For example, the global Internet registries still have some addresses left for organizations to use.
And some registries are acquiring unused addresses from various sources, including defunct ISPs.
Many providers use network address translation (NAT), which temporarily maps customers' private IPv4 addresses to a small pool of public IP addresses as needed. This lets many users share the limited supply of public addresses.
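The mapping at the heart of NAT can be sketched as a table that rewrites each outbound private flow to a distinct port on a shared public address. This is a minimal toy model, not a production implementation; the addresses and port range are made up.

```python
import itertools

class ToyNat:
    """Toy NAT table: many private hosts share one public address."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.ports = itertools.count(40000)  # arbitrary starting public port
        self.table = {}                      # (priv_ip, priv_port) -> pub_port
        self.reverse = {}                    # pub_port -> (priv_ip, priv_port)

    def outbound(self, priv_ip, priv_port):
        """Translate a private source endpoint to the shared public one."""
        key = (priv_ip, priv_port)
        if key not in self.table:
            pub_port = next(self.ports)
            self.table[key] = pub_port
            self.reverse[pub_port] = key
        return (self.public_ip, self.table[key])

    def inbound(self, pub_port):
        """Route a reply arriving on a public port back to the private host."""
        return self.reverse[pub_port]

nat = ToyNat("203.0.113.5")               # documentation-range address
a = nat.outbound("192.168.0.10", 5000)
b = nat.outbound("192.168.0.11", 5000)    # same private port, different host
```

Both hosts appear externally as 203.0.113.5 but on distinct ports, and replies are demultiplexed back to the right private host by port.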
Carriers also buy IP addresses from other carriers.
Nonetheless, experts say, these approaches will only last so long and eventually, we really will run out of IPv4 addresses.
Cooling Supercomputers Cheaply by Dunking Them in Liquid
One of the major ongoing expenses with supercomputers involves the air conditioning necessary to cool the massive, heat-producing systems.
Keeping them from overheating can require tens of millions of dollars in electricity annually.
Now, vendors have come up with a new approach to make the process less expensive: immersing supercomputers in liquids.
This not only reduces the amount of electricity consumed but also decreases the amount of required air conditioning and air filtering equipment.
This is important because supercomputer use is becoming more widespread, in part because of the increasing amount of data that even smaller organizations are collecting.
Scientists at the Tokyo Institute of Technology have immersed their Tsubame KFC supercomputer in mineral oil, using technology developed by the US's Green Revolution Cooling. The Green500 list of energy-efficient supercomputers, compiled by Virginia Tech computer scientists, named the machine the most energy efficient of its kind.
Startup Iceotope has built supercomputers and servers that it cools by submerging them in 3M's Novec fire-suppression fluid, a liquid fluoroplastic.
Hong Kong's Allied Control has developed immersion-cooling systems that also use 3M's Novec.
Unlike water, mineral oil and liquid fluoroplastics aren't conductive and thus won't cause electrical problems. To avoid physical damage, some parts are sealed.
Supercomputer makers have long used water to cool their machines, but only by running it inside pipes that pass through their systems.
Cray used immersion cooling for one of its supercomputers in the 1980s, but didn't pursue the approach because of high costs and concern about the environmental effects of the era's coolants.
Report: Java Is Now the Favorite Malware Target
A new security report says the Java platform is now malware developers’ favorite target.
About half of all malware attacks observed in December 2013 exploited Java, according to the IBM X-Force Threat Intelligence Quarterly Report for the first quarter of 2014.
The next most popular target was Adobe Reader, which was involved in 22 percent of observed incidents.
In the past, software products from Adobe and Microsoft were the most popular malware targets. However, these companies have tightened up their applications’ security. Experts say Oracle has not done the same with Java.
The IBM report explained that Java is a popular malware target because it is widely used and has many vulnerabilities. The researchers noted that the number of reported Java vulnerabilities has risen in recent years, tripling from 2012 to 2013. As a result, the number of Java exploits has grown, and many have been added to online hacker toolkits.
Moreover, experts say, Java is an easy target because it runs within its own virtual environment. Hackers have to break only the platform's security—not that of the host system—to cause problems.
Although some experts recommend against using Java, many organizations continue to do so because it has powerful cross-platform capabilities and is required to run many important applications.
In these cases, the IBM report says, organizations should use only known, trusted Java files.
Headset Beams Video Directly into Wearers’ Eyes
A startup plans to release a headset later this year that will beam movies, game sequences, and other types of video directly into users’ eyes, rather than onto a screen, thereby creating a highly immersive viewing experience.
Users could connect Avegant's Glyph headset to their smartphone, tablet, laptop, TV, game console, or other machine. They could then play video on one of the devices but watch it via Glyph.
The product looks like a set of headphones. When wearers want to view video, they slide the band that normally goes over the head in front of their eyes. The band contains 2 million microscopic mirrors that reflect video images onto users’ retinas.
Avegant showed Glyph at this year's International Consumer Electronics Show in Las Vegas. The company has also made the headset available to developers so that they can find new applications for it.
Glyph, which will cost $499, has a battery life of about three hours, according to Avegant.
The company launched a Kickstarter crowdfunding campaign for the headset in January, hit its goal of raising $250,000 in only four hours, and has now collected pledges of close to $1 million.
Users Control Ear-Mounted Computer with Facial Movements
Hiroshima City University researchers have developed a prototype tiny computer that a user can wear in one ear and control with eyeblinks, tongue clicks, and other facial movements.
This could help people who can't use their hands because they are driving, disabled, or frail.
The 17-gram device—called the Earclip-type Wearable PC—has a microchip and storage, and can download software. Infrared sensors monitor ear movements that occur along with various eye and mouth motions. Thus, users could connect the system to a device—such as an iPod—and control it by, for example, sticking out their tongue or arching an eyebrow.
The Earclip-type Wearable PC also has Bluetooth; GPS; a compass; a gyrosensor, which senses angular velocity; a barometer; a speaker; and a microphone.
Other sensors could be installed to monitor a wearer's vital signs. If the system finds a health problem, it could notify family members.
An accelerometer could identify if an older user falls, which would prompt the computer to call either relatives or an ambulance.
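As an illustration of how such an accelerometer check might work, the sketch below flags a fall when a reading's magnitude spikes well above normal gravity. The threshold and sample readings are invented values, not the researchers' method.

```python
import math

FALL_THRESHOLD_G = 2.5   # assumed: accelerations above ~2.5 g suggest an impact

def magnitude(reading):
    """Overall acceleration, in g, from a 3-axis (x, y, z) sample."""
    x, y, z = reading
    return math.sqrt(x * x + y * y + z * z)

def detect_fall(samples):
    """Return True if any sample's magnitude exceeds the threshold."""
    return any(magnitude(s) > FALL_THRESHOLD_G for s in samples)

resting = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0)]   # roughly 1 g: normal posture
impact  = [(0.0, 0.0, 1.0), (2.4, 1.1, 2.0)]   # sudden spike: possible fall
```

A real detector would also look at the pattern around the spike (free fall followed by impact and stillness) rather than a single threshold.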
The researchers are still testing their device. They hope to have it ready in time for the Christmas 2015 shopping season and to have it widely available commercially by early 2016.
Application Makes Cloud Operations More Efficient
A key challenge for large cloud-computing systems is the efficient distribution of workloads among servers.
To address this problem, Stanford University associate professor Christos Kozyrakis and doctoral candidate Christina Delimitrou have developed Quasar, a software system that makes cloud systems up to three times more efficient than in the past.
According to the scientists, this approach could enable cloud systems’ datacenters to run more optimally and closer to their full capacities, thereby consuming less energy.
Experts say that increasing efficiency will be essential for cloud computing to grow.
“Today, datacenters are managed by a reservation system,” said Kozyrakis. “Application developers estimate what resources they will need and reserve server capacity.” However, users—either deliberately or accidentally—frequently overestimate the resources they need, which leads to inefficiency.
“We want to switch from reservation-based cluster management to a performance-based allocation of datacenter resources,” Kozyrakis said.
Quasar does this by first identifying the level of performance that an application requires.
The system's database contains information about how various types of applications have performed on different kinds of servers in the past. It uses this information to determine which specific datacenter servers should be used to effectively run various programs and how best to assign multiple applications to individual servers.
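A highly simplified sketch of performance-based placement in this spirit: given historical throughput of each application type on each server type, pick the cheapest server class that still meets the application's performance target. The figures and names below are invented, and Quasar's actual classification techniques are considerably more sophisticated.

```python
# Historical performance data: (app_type, server_type) -> requests/sec.
# All figures here are invented for illustration.
history = {
    ("web", "small"): 800, ("web", "large"): 2000,
    ("analytics", "small"): 150, ("analytics", "large"): 900,
}
server_cost = {"small": 1, "large": 3}   # arbitrary relative cost units

def place(app_type, target_rps):
    """Cheapest server type whose observed performance meets the target."""
    candidates = [
        server for (app, server), rps in history.items()
        if app == app_type and rps >= target_rps
    ]
    if not candidates:
        return None                      # no single server class suffices
    return min(candidates, key=server_cost.__getitem__)
```

For example, a web application needing 500 requests/sec fits on a "small" server, one needing 1,500 requires a "large" one, and a target no class can meet returns None, signaling that the work must be split or new capacity found.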
The scientists say they are considering either forming a company to commercialize Quasar or giving it to an open source organization. They note that several large technology companies have expressed interest in the software.
New Technology Lets Users Borrow Others’ Wi-Fi Bandwidth
A technology developed by Spanish telecommunications provider Telefónica lets Wi-Fi users who don't have enough bandwidth borrow unused capacity from nearby networks.
This could, for example, help organizations and individuals download large amounts of multimedia or other data-heavy content that might otherwise overwhelm their current Wi-Fi systems.
For the BeWifi project (www.bewifi.es), which Telefónica has worked on for six years, the company designed routers running software that aggregates the available bandwidth in an area into a mesh network and makes it available to all local customers.
Telefónica said the pilot program it conducted with 1,000 customers in the region of Catalonia doubled their previous data rates.
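The aggregation idea reduces to simple arithmetic: a user's effective bandwidth is their own link plus whatever capacity neighboring routers aren't using. This is a toy model with invented figures; the real system must also handle routing, fairness, and security.

```python
def effective_bandwidth(own_mbps, neighbors):
    """Own link plus the spare capacity of neighboring routers.

    neighbors: list of (capacity_mbps, current_usage_mbps) tuples.
    """
    spare = sum(max(capacity - used, 0.0) for capacity, used in neighbors)
    return own_mbps + spare

# Invented example: a 10 Mbps link plus two lightly used neighbors.
boosted = effective_bandwidth(10.0, [(10.0, 4.0), (10.0, 6.0)])
```

In this made-up case the user's rate doubles from 10 to 20 Mbps, the same order of improvement the pilot reported.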
Before the technology can be used commercially, Telefónica must address issues such as security, privacy, and customers' ability to opt out of the bandwidth-sharing program.