Fast New Version of I/O Interface Introduced
Just when it looked like USB had won the I/O interface war, a trade organization has announced plans for a new, faster version of a competing technology for connecting peripherals to systems.
The Serial ATA International Organization recently said it will complete work on its SATA Universal Storage Module approach, geared toward consumer-storage applications, later this year. SATA-IO says a single USM unit could simultaneously and quickly provide content to multiple devices, such as a TV, laptop, and PC.
The organization says the technology will offer data rates up to 6 gigabits per second, with support for earlier SATA versions that offer speeds of 1.5 or 3 Gbps.
USB has become the dominant I/O interface and is used in many settings. The latest version, USB 3.0—also known as SuperSpeed USB—began appearing in consumer products in late 2009. It provides data rates up to 5 gigabits per second, has been widely adopted, and is backward-compatible with earlier USB versions.
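A back-of-the-envelope calculation shows what the quoted line rates mean in practice. The sketch below assumes 8b/10b line coding (which both SATA 6 Gbps and USB 3.0 use, leaving 80 percent of the signaling rate for payload) and ignores all other protocol overhead, so real throughput would be lower still; the 25-GB file size is an arbitrary illustration.

```python
# Idealized transfer times at the raw line rates quoted above.
# Assumes 8b/10b coding (80% payload efficiency); ignores protocol overhead.

def transfer_seconds(file_gigabytes, line_rate_gbps, coding_efficiency=0.8):
    """Time to move a file at the given signaling rate, idealized."""
    payload_gbps = line_rate_gbps * coding_efficiency
    return file_gigabytes * 8 / payload_gbps  # bytes -> bits, then divide

file_gb = 25  # an arbitrary, Blu-ray-sized example
for name, rate in [("USB 3.0 (5 Gbps)", 5.0), ("SATA (6 Gbps)", 6.0)]:
    print(f"{name}: {transfer_seconds(file_gb, rate):.0f} s")
```

At these idealized rates, the SATA interface's 1-Gbps edge saves only a few seconds on such a transfer; in practice the storage medium, not the interface, is often the bottleneck.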
Man-in-the-Browser Attacks Afflict Corporate Networks
Cybercriminals are beginning to use a new approach to launch attacks against increasingly well-defended corporate networks. With these networks protected by firewalls, antivirus software, and other forms of security, hackers are increasingly turning to what they now see as the most vulnerable point: the browser.
They are thus burrowing into the networks via man-in-the-browser attacks, which start with a website containing malicious software. When visitors arrive, the malware is installed on the browser, giving hackers control of it.
Attackers could steal login credentials, account numbers, financial information, or other personal material. They could also modify pages, content, or transaction data presented to the user. During a financial transaction, the hackers could use the malware to surreptitiously send transfer or payment requests. The customer wouldn't realize there was a problem until the next account statement arrived.
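The mechanism can be modeled in a few lines. The defanged sketch below illustrates the general man-in-the-browser pattern described above; all names (the function, the account strings) are hypothetical, and this is a conceptual model of the attack, not real malware or any specific toolkit.

```python
# Defanged model of a man-in-the-browser attack: injected code sits between
# what the user sees and what the site receives. All names are hypothetical.

def infected_submit(transfer, send_to_bank, show_to_user):
    """Hypothetical hooked form-submission path in a compromised browser."""
    # The injected hook silently rewrites the request the bank receives...
    tampered = dict(transfer, payee="attacker-account")
    send_to_bank(tampered)
    # ...while echoing the user's original input back to the page, which is
    # why the fraud surfaces only on the next account statement.
    show_to_user(transfer)

sent, shown = [], []
infected_submit({"payee": "grocer", "amount": 50}, sent.append, shown.append)
print(sent[0]["payee"], "vs. displayed:", shown[0]["payee"])
```

Because the tampering happens inside the browser after the user approves the transaction, network-level defenses such as TLS and firewalls never see anything amiss.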
China Develops Technology for Ultrafast Supercomputers
China recently provided information on a microprocessor that its Institute of Computing Technology has designed and that is slated to power a multipetaflops supercomputer in the not-too-distant future.
China's Dawning Information Industry will use about 3,000 Godson-3B multicore microprocessors in a high-performance 300-teraflops system it plans to release this summer. The company plans a future version that will deliver at least 1 petaflops.
The eight-core, 1.05-GHz Godson-3B chip, built with 65-nm feature sizes, delivers 128 gigaflops and is already in production.
Planned for release in two years, the 2-GHz Godson-3C will have 16 cores and 28-nm feature sizes, and will offer 512 Gflops.
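The figures quoted above are internally consistent, as a quick peak-throughput calculation shows. This is theoretical peak arithmetic only; sustained performance on real workloads is always lower, which is presumably why 3,000 chips back a 300-teraflops system rating rather than the full peak.

```python
# Rough peak-throughput arithmetic from the Godson-3B figures quoted above.

chip_gflops = 128
cores, ghz = 8, 1.05
flops_per_core_cycle = chip_gflops / (cores * ghz)
# ~15 floating-point operations per core per cycle, consistent with
# wide vector units rather than simple scalar pipelines.
print(f"~{flops_per_core_cycle:.1f} flops per core per cycle")

chips = 3000
peak_tflops = chips * chip_gflops / 1000
print(f"theoretical peak: {peak_tflops:.0f} Tflops "
      f"vs. the 300-Tflops system rating")
```

The planned Godson-3C, with twice the cores at nearly twice the clock rate, lands at exactly 4x the per-chip figure (512 Gflops), matching the same arithmetic.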
The Chinese have become important players in the supercomputer industry. On the most recent list of the top 500 high-performance computers—published in November 2010 by several researchers in the US and Germany—China has the fastest and third-fastest systems.
The most powerful supercomputer is the Tianhe-1A, developed by the Chinese National University of Defense Technology and located at the National Supercomputing Center in Tianjin. The third-fastest is the Nebulae, developed by Dawning and located at the National Supercomputing Center in Shenzhen.
Amazon's Massive Cloud Hosting Site Crashes
Amazon Web Services—a huge, cloud-based Web-hosting system—crashed recently, causing problems for the many large online operations it serves.
Industry observers say the temporary crash raises questions about the dependability of AWS and perhaps even the cloud itself. Other large, cloud-based Web services have also experienced temporary failures.
Many organizations—including Foursquare, Hootsuite, the New York Times, Quora, and Reddit—pay to run at least some of their websites on Amazon's service, which works with the company's Elastic Compute Cloud.
EC2 is a distributed system of servers—located in five global regions, with additional subsystems in each region—designed to provide flexible, scalable, and redundant Web services.
Amazon has touted its system's reliability. However, the recent crash pushed some hosted websites offline for many hours. Amazon says the crash started in a subsystem within its US East Region that became unable to service read and write operations.
According to the company, the following occurred:
• Amazon was changing system configurations to upgrade the capacity of a primary network in the subsystem. Instead of shifting its traffic to a redundant router, the company mistakenly moved the communications to a lower-capacity network, which could not handle the volume.
• This left the primary and secondary networks unable to function, causing a loop of service requests that could not be satisfied, thereby overloading the system.
• At first, attempts to resolve the issues caused them to spread, undermining the redundancy designed to keep the system available.
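The unsatisfiable request loop described above is a classic retry storm: clients that retry failed requests immediately add load to an already overloaded system. A standard mitigation in distributed systems (illustrative only, not necessarily the mechanism Amazon used) is exponential backoff with jitter, sketched below.

```python
# A minimal sketch of exponential backoff with "full jitter": each retry
# waits a random time whose upper bound doubles per attempt, up to a cap.
# This spreads retries out instead of letting them snowball into a storm.
import random

def backoff_delays(base=0.1, cap=30.0, attempts=6, seed=42):
    """Yield randomized wait times (seconds) that grow per retry."""
    rng = random.Random(seed)  # seeded only so the sketch is reproducible
    for attempt in range(attempts):
        yield rng.uniform(0, min(cap, base * 2 ** attempt))

print([round(d, 3) for d in backoff_delays()])
```

The randomness matters as much as the exponential growth: if every failed client waited exactly the same doubled interval, their retries would all land at once and re-create the overload in synchronized waves.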
After fixing the problems, Amazon said it would audit its configuration process, change its procedures as necessary, and increase automation to prevent a recurrence.
Could a Proposed Wireless Network Harm GPS Services?
A proposed fast wireless network could overwhelm GPS signals in the US, interfering with important services such as police and air-traffic communications, as well as navigation devices, according to some industry observers.
The issue involves permission the US Federal Communications Commission (FCC) gave LightSquared to construct and implement a nationwide broadband network using frequencies close to those utilized by GPS.
GPS vendors say that strong signals from LightSquared's system could jam their transmissions. LightSquared and the FCC contend the proposed network would not cause these problems.
US officials say they won't let LightSquared switch on its network as scheduled later this year unless they feel confident that GPS systems will still function properly.
Earlier this year, the FCC approved LightSquared's network proposal, saying it would increase competition among wireless providers and thereby make US mobile services faster and less expensive.
The company plans to operate a hybrid network that would work with both Long Term Evolution (LTE) wireless and satellite technologies. Its services—which would be provided to the public by wireless carriers, not directly by LightSquared—would compete with those offered by companies such as AT&T and Verizon Wireless.
Various sources say interference from LightSquared's service could threaten, for example, automobile-navigation systems, public-safety communications networks, and a proposed US Federal Aviation Administration plan to use GPS to improve the nation's air-traffic-control operations. These sources include car-navigation equipment manufacturers, aviation-related organizations, police officials, and the US military.
In the past, GPS receivers have been able to filter out low-power signals in nearby frequencies, most of which have come from satellites. However, vendors say, a large ground-based system would cause interference issues.
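Basic radio physics explains the vendors' concern: free-space path loss grows with the square of distance, so a terrestrial transmitter a few kilometers away arrives enormously stronger at a receiver than a signal from orbit. The sketch below compares only the distance term, ignoring transmit power and antenna gains; the 10-km tower distance is an arbitrary illustration, and the 1531-MHz figure is an assumed value near the band in question.

```python
# Why a strong ground-based transmitter near the GPS band worries receiver
# makers: path loss alone puts the satellite at a huge disadvantage.
# (Illustrative physics only; ignores transmit power and antenna gains.)
from math import log10, pi

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * log10(4 * pi * distance_m * freq_hz / c)

gps_sat = fspl_db(20_200e3, 1575.42e6)  # GPS L1 from its ~20,200-km orbit
tower = fspl_db(10e3, 1531e6)           # hypothetical tower 10 km away
print(f"satellite signal loses ~{gps_sat - tower:.0f} dB more than tower's")
```

A gap on the order of 60-plus dB (a factor of over a million in power) is why filtering that sufficed against other weak satellite signals may not cope with a dense ground-based network in an adjacent band.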
GPS units could add filters for LightSquared signals, but some sources say that this approach might be very expensive.
LightSquared and the FCC say more testing will occur to determine whether the new system would cause interference problems. The company is slated to participate in the testing.
Study Shows Significant Downtime for Critical APIs
A study has shown that some of the public APIs that major online operations use to access important third-party services experience significant downtime, which could cause problems for those operations.
Website- and application-performance monitoring company WatchMouse checked the availability of 50 major cloud-based APIs every five minutes from 16 February to 17 March 2011. The testing included a simple API call and a check for a valid result. The lack of a proper or prompt response contributed to downtime ratings.
According to WatchMouse's website, "In accordance with industry standards, availability of [at least] 99.9 percent is regarded as good, while anything below 99 percent is regarded as poor. (99 percent uptime equals over 80 hours downtime per year, or about one business day per month.)"
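WatchMouse's rule of thumb checks out arithmetically, as the short calculation below shows (8,760 hours per non-leap year; the 94.32 percent figure is the study's lowest score, reported further on).

```python
# Checking the downtime arithmetic behind the availability thresholds above.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

def downtime_hours(availability_pct, hours=HOURS_PER_YEAR):
    """Annual downtime implied by an availability percentage."""
    return (100 - availability_pct) / 100 * hours

print(f"{downtime_hours(99.9):.1f} h/yr")   # "good" threshold: under 9 hours
print(f"{downtime_hours(99.0):.1f} h/yr")   # "poor" threshold: 87.6 hours,
                                            # roughly 7.3 hours per month
print(f"{downtime_hours(94.32):.1f} h/yr")  # worst score measured: ~500 hours
```

The jump is stark: each lost "nine" of availability multiplies annual downtime tenfold, which is why the gap between 99.9 and 99 percent separates "good" from "poor."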
Ten of the interfaces tested—five from Google and the Basecamp, Delicious update, eBay shopping, Quora, and SimpleGeo APIs—had 100 percent uptime.
The lowest score was 94.32 percent uptime, by the MySpace Open Search API. Others scoring below 99 percent were the Digg (98.66 percent), Gowalla (98.52 percent), GeoNames (97.47 percent), Eventful (97.20 percent), and Posterous (97.17 percent) APIs.
WatchMouse's own API showed 99.98 percent uptime.
New Organization to Develop Networking Standards
A group of technology companies has formed an organization to develop and manage standards based on a new approach called software-defined networking.
The Open Networking Foundation (ONF) says it will standardize SDN technologies—pioneered at Stanford University and the University of California, Berkeley—designed to make large and small networks programmable.
Proponents say SDN could yield flexible, secure networks that have fewer traffic issues and that could be less costly to construct and run.
The group says the ONF's work could give network owners a standardized way to flexibly control their operations.
ONF members include Broadcom, Cisco Systems, Dell, Deutsche Telekom, Ericsson, Facebook, Google, Hewlett-Packard, IBM, Juniper Networks, Microsoft, Verizon, VMware, and Yahoo.
Typically, most of the Internet's intelligence resides in the computers functioning as end points, not in routers or elsewhere within the network.
In cloud computing, though, information and applications are on network-based computers. For this system to function properly, SDN proponents say, smarter networks would be necessary to, for example, orchestrate the behavior of the thousands of routers that would be involved.
This approach would enable, for instance, the configuration of networks to handle heavy traffic or the prioritization of certain types of data to achieve quality of service for time-sensitive traffic.
Proponents say SDN would permit these types of activities to happen by making previously proprietary hardware and software systems that control packet flow more open.
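The programmability proponents describe boils down to letting software install match/action rules into switches, the model popularized by the OpenFlow work that came out of the Stanford and Berkeley research. The toy flow table below illustrates that idea in miniature; the field names and actions are made up for the example and are not the actual OpenFlow specification.

```python
# A toy flow table in the match/action style of SDN: a controller program
# installs rules, and packets take the action of the first rule they match.
# Field names and actions here are illustrative, not a real protocol.

flow_table = [
    # (match criteria, action) pairs, checked in priority order
    ({"dst_port": 5060}, "queue:voice"),    # prioritize time-sensitive VoIP
    ({"src_ip": "10.0.0.7"}, "forward:2"),  # steer one host out port 2
    ({}, "drop"),                           # empty match: catch-all default
]

def apply_flow(packet):
    """Return the action of the first rule whose criteria all match."""
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "drop"

print(apply_flow({"dst_port": 5060, "src_ip": "10.0.0.7"}))  # queue:voice
```

Because the table is just data, a central controller can rewrite it on the fly to reroute around congestion or reprioritize traffic, which is exactly the kind of quality-of-service control described above, without touching proprietary switch firmware.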
Several companies—including Cisco, Hewlett-Packard, and Juniper Networks—have produced prototype SDN systems.
Some industry observers say SDN appears to have wide industry support but still must prove itself in actual use and in the marketplace.