Computing Now Exclusive Content — July 2011

News Archive

July 2012

Gig.U Project Aims for an Ultrafast US Internet

June 2012

Bringing Location and Navigation Technology Indoors

May 2012

Plans Under Way for Roaming between Cellular and Wi-Fi Networks

Encryption System Flaw Threatens Internet Security

April 2012

For Business Intelligence, the Trend Is Location, Location, Location

Corpus Linguistics Keep Up-to-Date with Language

March 2012

Are Tomorrow's Firewalls Finally Here Today?

February 2012

Spatial Humanities Brings History to Life

December 2011

Could Hackers Take Your Car for a Ride?

November 2011

What to Do about Supercookies?

October 2011

Lights, Camera, Virtual Moviemaking

September 2011

Revolutionizing Wall Street with News Analytics

August 2011

Growing Network-Encryption Use Puts Systems at Risk

New Project Could Promote Semantic Web

July 2011

FBI Employs New Botnet Eradication Tactics

Google and Twitter "Like" Social Indexing

June 2011

Computing Commodities Market in the Cloud

May 2011

Intel Chips Step up to 3D

Apple Programming Error Raises Privacy Concerns

Thunderbolt Promises Lightning Speed

April 2011

Industrial Control Systems Face More Security Challenges

Microsoft Effort Takes Down Massive Botnet

March 2011

IP Addresses Getting Security Upgrade

February 2011

Studios Agree on DRM Infrastructure

January 2011

New Web Protocol Promises to Reduce Browser Latency

To Be or NAT to Be?

December 2010

Intel Gets inside the Helmet

Tuning Body-to-Body Networks with RF Modeling

November 2010

New Wi-Fi Spec Simplifies Connectivity

Expanded Top-Level Domains Could Spur Internet Real Estate Boom

October 2010

New Weapon in War on Botnets

September 2010

Content-Centered Internet Architecture Gets a Boost

Gesturing Going Mainstream

August 2010

Is Context-Aware Computing Ready for the Limelight?

Flexible Routing in the Cloud

Signal Congestion Rejuvenates Interest in Cell Paging-Channel Protocol

July 2010

New Protocol Improves Interaction among Networked Devices and Applications

Security for Domain Name System Takes a Big Step Forward

The ROADM to Smarter Optical Networking

Distributed Cache Goes Mainstream

June 2010

New Application Protects Mobile-Phone Passwords

WiGig Alliance Reveals Ultrafast Wireless Specification

Cognitive Radio Adds Intelligence to Wireless Technology

May 2010

New Product Uses Light Connections in Blade Server

April 2010

Browser Fingerprints Threaten Privacy

New Animation Technique Uses Motion Frequencies to Shake Trees

March 2010

Researchers Take Promising Approach to Chemical Computing

Screen-Capture Programming: What You See is What You Script

Research Project Sends Data Wirelessly at High Speeds via Light

February 2010

Faster Testing for Complex Software Systems

IEEE 802.1Qbg/h to Simplify Data Center Virtual LAN Management

Distributed Data-Analysis Approach Gains Popularity

Twitter Tweak Helps Haiti Relief Effort

January 2010

2010 Rings in Some Y2K-like Problems

Infrastructure Sensors Improve Home Monitoring

Internet Search Takes a Semantic Turn

December 2009

Phase-Change Memory Technology Moves toward Mass Production

IBM Crowdsources Translation Software

Digital Ants Promise New Security Paradigm

November 2009

Program Uses Mobile Technology to Help with Crises

More Cores Keep Power Down

White-Space Networking Goes Live

Mobile Web 2.0 Experiences Growing Pains

October 2009

More Spectrum Sought for Body Sensor Networks

Optics for Universal I/O and Speed

High-Performance Computing Adds Virtualization to the Mix

ICANN Accountability Goes Multinational

RFID Tags Chat Their Way to Energy Efficiency

September 2009

Delay-Tolerant Networks in Your Pocket

Flash Cookies Stir Privacy Concerns

Addressing the Challenge of Cloud-Computing Interoperability

Ephemeralizing the Web

August 2009

Bluetooth Speeds Up

Grids Get Closer

DCN Gets Ready for Production

The Sims Meet Science

Sexy Space Threat Comes to Mobile Phones

July 2009

WiGig Alliance Makes Push for HD Specification

New Dilemmas, Same Principles:
Changing Landscape Requires IT Ethics to Go Mainstream

Synthetic DNS Stirs Controversy:
Why Breaking Is a Good Thing

New Approach Fights Microchip Piracy

Technique Makes Strong Encryption Easier to Use

New Adobe Flash Streams Internet Directly to TVs

June 2009

Aging Satellites Spark GPS Concerns

The Changing World of Outsourcing

North American CS Enrollment Rises for First Time in Seven Years

Materials Breakthrough Could Eliminate Bootups

April 2009

Trusted Computing Shapes Self-Encrypting Drives

March 2009

Google, Publishers to Try New Advertising Methods

Siftables Offer New Interaction Model for Serious Games

Hulu Boxed In by Media Conglomerates

February 2009

Chips on Verge of Reaching 32 nm Nodes

Hathaway to Lead Cybersecurity Review

A Match Made in Heaven: Gaming Enters the Cloud

January 2009

Government Support Could Spell Big Year for Open Source

25 Reasons For Better Programming

Web Guide Turns Playstation 3 Consoles into Supercomputing Cluster

Flagbearers for Technology: Contemporary Techniques Showcase US Artifacts and European Treasures

December 2008

.Tel TLD Debuts As New Way to Network

Science Exchange

November 2008

The Future is Reconfigurable

Google and Twitter “Like” Social Indexing

by George Lawton

For the last decade, the dominant approach to finding information on the Web has been Google’s link-based approach. Google indexes material by using page-rank algorithms that use the links between pages. Pages with many links from other sites are rated as more important and, for example, placed at the top of search results.
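The link-based idea can be illustrated with a short sketch. The following is a minimal power-iteration version of the page-rank approach the article describes, not Google’s actual implementation; the toy link graph and the damping factor of 0.85 are illustrative assumptions:

```python
# Minimal power-iteration sketch of link-based page rank.
# The link graph and damping factor are illustrative, not Google's real data.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:
                # Distribute this page's rank evenly across its out-links.
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
        rank = new_rank
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(links)
# Page "c" has the most in-links (from a, b, and d), so it ranks highest.
assert max(ranks, key=ranks.get) == "c"
```

Pages with many inbound links accumulate rank from their neighbors on each iteration, which is why heavily linked pages surface at the top of results.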

People use Google not only to search for information but also to find products, books, and so on. Recently, both Google and Twitter have revealed new efforts to integrate preference information into their indexes in the same manner as Facebook’s “like” button, which lets users indicate their preference by clicking a special website button.

Professor Jon M. Kleinberg at Cornell University said, “The ‘like’ button is a channel that gets used when your actions by themselves are not a rich enough language for expressing your opinion. You get richer information to the extent that you can make this feedback part of the workflow of the site. When you combine these features with methods for personalizing the site, then people see some value in expressing their opinion.”

Facebook launched the “like” button on its public social-networking site in April 2009. In April 2010, it merged the button with Facebook’s Fan feature for companies, which the button replaced. By late April 2011, the “like” button had been added to more than 250,000 sites. Both Google and Twitter recently released key advances in their respective social-indexing platforms, called “+1” and “Follow.”

Smaller companies are focused on extending social indexing into specific domains. For example, GetGlue has attracted 1.2 million users, who have created the largest index of TV shows in the world. “The entertainment space is one of the most social and telling areas, when it comes to learning about people’s preferences,” said CEO Alex Iskold.

Proponents say that for some users and in some circumstances, this approach is more effective than one based on links. Network analysis tools, which help users understand the significance of metadata, play a key role in social indexing. These tools are better than traditional analytic techniques at considering the opinions of friends when looking for restaurants, books, movies, websites, or TV shows, said Iskold.

The Evolution of Search

A main driver for social-indexing technology is the new kinds of analysis it makes possible compared with traditional approaches. For example, Kleinberg notes that people can get more useful information when a site uses algorithms that incorporate feedback about people’s preferences.

Using preference metadata in search is the next major evolution of indexing systems, said Marc Smith, chief social scientist at the Connected Action consulting group, who also led the development of NodeXL, the world’s most popular network analysis tool. “I would argue that we’re about to move from the era of page-rank–based search to an era of people-rank–based search, where we’re not just looking for the links between pages, but starting to look at the links between people and new forms of links beyond. Text indexes created a higher value representation of content, while social indexing creates a higher level representation of collections of connections. It’s a natural progression.”

Smith said the need for search originated in the explosion in documents. The first Web search techniques treated all documents as independent islands. The field of search took a major step forward with Google’s introduction of the page-rank algorithm, which linked a Web site’s relevance to the number of other sites pointing to it.

Now the use of tie-relationship types is opening a new era of more precise search. These tie-relationship types include liking, linking, rating, reviewing, commenting, re-tweeting, replying, following, friending, and contacting.
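As a rough sketch of how such tie types might feed a people-rank score, the snippet below combines typed interactions between two people into a single tie-strength value. The tie types come from the list above, but the weights are purely hypothetical assumptions:

```python
from collections import defaultdict

# Hypothetical weights: heavier tie types (friending, replying) count more
# than lightweight ones (liking, following). The values are illustrative only.
TIE_WEIGHTS = {"like": 0.2, "follow": 0.3, "retweet": 0.5,
               "comment": 0.7, "reply": 0.8, "friend": 1.0}

def tie_strength(events):
    """events: iterable of (person_a, person_b, tie_type) tuples.
    Returns an accumulated strength score per (a, b) pair."""
    strength = defaultdict(float)
    for a, b, tie in events:
        strength[(a, b)] += TIE_WEIGHTS.get(tie, 0.1)  # unknown ties count a little
    return dict(strength)

events = [("ann", "bob", "like"), ("ann", "bob", "reply"),
          ("ann", "cat", "follow")]
s = tie_strength(events)
# Repeated, richer interactions produce a stronger tie than a single follow.
assert s[("ann", "bob")] > s[("ann", "cat")]
```

A people-rank search could then weight results by the tie strength between the searcher and the people who liked, reviewed, or shared each item.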

Social-Indexing Technology Drivers

The growth of social-indexing technologies is being driven by improvements in processing algorithms, display techniques, and parallel processing techniques.

The network analysis required for social-indexing services is significant. Smith said that a basic desktop computer can process network microscale graphs of less than 1,000 nodes. Larger computations on mesoscale graphs of 10,000–1,000,000 nodes require dedicated hardware, and megascale graphs—beyond one million nodes—require a large data center or cloud service provider.

Smith said desktops offer a lot of opportunity for parallelism. Nvidia’s Tesla represents a new class of GPU cards that enable mesoscale analysis. These cards come with up to 480 GPU cores per card, and a single computer chassis can hold up to four cards.

A second shift has been the evolution of MapReduce tools such as Hadoop. Iskold said these tools let developers crunch complex algorithms in a matter of minutes by distributing the computation across many virtual machines in the cloud.
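To make the MapReduce pattern concrete, here is a minimal single-process sketch of its map, shuffle, and reduce phases, counting “likes” per TV show. A framework such as Hadoop would run the map and reduce functions across many machines; the record format and the two-shard split here are illustrative assumptions:

```python
from collections import defaultdict
from itertools import chain

# Map phase: each ("user", "item") record becomes an (item, 1) pair.
def map_phase(records):
    return [(item, 1) for user, item in records]

# Shuffle: group values by key, as the framework does between map and reduce.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: sum the counts for each item.
def reduce_phase(groups):
    return {item: sum(values) for item, values in groups.items()}

records = [("u1", "showA"), ("u2", "showA"), ("u1", "showB")]
shards = [records[:2], records[2:]]  # pretend two machines each map one shard
mapped = list(chain.from_iterable(map_phase(s) for s in shards))
counts = reduce_phase(shuffle(mapped))
assert counts == {"showA": 2, "showB": 1}
```

Because each shard is mapped independently and the reduce step only sees grouped pairs, adding more machines shortens the wall-clock time, which is the speedup Iskold describes.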

Until recently, the IT world shunned the analytic techniques used in network analysis because of the higher redundancy and lack of data integrity, said Iskold. “Replication was a big ‘No’ in Wall Street, because data integrity was so important. But what’s important now is accurately representing the preference of users.”

Network analysis techniques have also improved, said Kleinberg. For example, researchers are trying to identify useful ways of scaling a network’s size up or down. In many cases, a developer can scale a complex network graph down to speed up analysis, but in other cases, this process reduces accuracy. More research is required to understand the best practices around scaling.

Another set of challenges lies in identifying the most efficient ways of rendering complex network graphs for analysis, said Cody Dunne, a doctoral student at the University of Maryland. The display screen itself places fundamental limits on how many nodes a user can visualize. Dunne said the biggest, most obvious challenges network analysis tools face are nodes overlapping, edges crossing unnecessarily, and edges tunneling underneath nodes without connecting to them. “There are automated layout algorithms and post-processing approaches for reducing these somewhat,” he said, “but they’re not usually implemented by analysis tools.”
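One of the readability problems Dunne mentions, unnecessary edge crossings, can be measured directly. The sketch below counts proper crossings between edges that share no endpoint, given hypothetical node positions; it is an illustrative metric, not a feature of any particular analysis tool:

```python
def crosses(p1, p2, p3, p4):
    """True if open segments p1-p2 and p3-p4 properly intersect."""
    def orient(a, b, c):
        # Sign of the cross product: which side of line a-b point c lies on.
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def edge_crossings(pos, edges):
    """Count pairwise crossings among edges that share no endpoint."""
    count = 0
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            (a, b), (c, d) = edges[i], edges[j]
            if {a, b} & {c, d}:
                continue  # a shared endpoint is a junction, not a crossing
            if crosses(pos[a], pos[b], pos[c], pos[d]):
                count += 1
    return count

pos = {"a": (0, 0), "b": (2, 2), "c": (0, 2), "d": (2, 0)}
assert edge_crossings(pos, [("a", "b"), ("c", "d")]) == 1  # diagonals cross
```

A layout algorithm could use such a count as the objective to minimize, which is one way the readability metrics Dunne proposes might be made operational.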

Dunne sees a need to develop readability metrics for these kinds of graphs. “As social network analysis and graph drawing in general become more mainstream,” he explained, “it’s important to provide new entrants guidelines for effective graph-drawing creation. Without them, the graph drawings users produce can be unintelligible or even misleading.”

The first generation of tools is limited to specifying preferences, said Kleinberg. He expects social indexing to open up information classification along two dimensions: respect and agreement. He said these two dimensions often get bundled together in current preference implementations. New approaches, such as rating systems that let participants rate other participants as well as products, could tease apart information about users’ preferences from their level of respect for others.
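As an illustration of keeping Kleinberg’s two dimensions separate, the sketch below records agreement votes (on a participant’s opinions) apart from respect votes (on the participant as a reviewer). The data model and field names are hypothetical:

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical two-axis rating record: "agreement" (do I share this person's
# opinions?) is stored separately from "respect" (do I value them as a
# reviewer?), rather than bundling both into one "like".

@dataclass
class Participant:
    name: str
    agreement: list = field(default_factory=list)  # 0/1 votes on their opinions
    respect: list = field(default_factory=list)    # 0/1 votes on them as a reviewer

    def profile(self):
        return {"agreement": mean(self.agreement) if self.agreement else None,
                "respect": mean(self.respect) if self.respect else None}

critic = Participant("critic")
critic.agreement += [0, 0, 1]  # most readers disagree with the opinions...
critic.respect += [1, 1, 1]    # ...yet still rate the reviewer as trustworthy
p = critic.profile()
assert p["respect"] > p["agreement"]  # the two signals can diverge
```

A single “like” would collapse this contrarian-but-respected critic into one ambiguous number; keeping the axes apart is what lets the system tease preference and respect apart.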

As Preferences Proliferate...

Experts believe that the proliferation of social indexes will also attract the attention of spammers and others who will try to game the system. “With any system,” Kleinberg warned, “you have to design the features and analysis knowing that spammers and well-intentioned people will take advantage of it in ways you did not expect. The popularity of these systems will incentivize people to adapt their behavior and do well by these metrics.”

There are also concerns about the number of different preference systems the Internet can practically support. Smith said, “I don't think there will be 20 buttons on every site, although there currently are on many.”

Smith believes that the proliferation of preference buttons could evolve in one of two ways. The number of preference indexes could continue to grow and be balanced and managed by a layer of preference-management tools that can add preferences to or get data from multiple social indexes. The other possibility is a competitive winnowing process that forces the convergence to two or three centralized repositories managed by companies like Google, Facebook, and Amazon.

George Lawton is a freelance researcher based in Guerneville, CA. Contact him at