Sun Microsystems
Pages: 37-39
Peer-to-peer networking is a hot buzzword (or as hot as it can be these days) that has been sweeping through the computing industry over the past year or so. Fortune crowned P2P as one of the four technologies that will shape the Internet's future, and it continues to pop up in trade magazines and the mainstream press. Despite all the hype, there is a lot of substance to the topic as well.
While it is futile to try to give a precise mathematical definition of the wide range of computing technologies that support P2P systems, it is safe to say that they adopt a network-based computing style that neither excludes nor inherently depends on centralized control points. In an instance of the idea that "the network is the computer," P2P network nodes (computers, people, and so on) relate to each other "side by side" — juxtaposed physically and logically — within a global computing arena. In this sense, P2P is about awareness, of self and surroundings. It emphasizes the ability to reach out, discover, and connect with others, regardless of whether a prior relationship exists. P2P does not dictate predetermined or particular types of relationships that fall into well-known categories such as master-slave, client-server, or consumer-supplier.
A year ago, the editorial board of IEEE Internet Computing predicted that by early 2002 the emerging P2P wave would become mature enough for a theme issue. Since then, development in the technology arena and the marketplace has enhanced our understanding of what P2P has to offer and how it might be used.
To illustrate P2P's potential benefits, let us examine the Internet's three fundamental assets: information, bandwidth, and computing resources. All of these are now vastly underutilized, partly due to the traditional client-server computing model. Finding useful information in real time is increasingly difficult, for example, because no single search engine or portal can locate and catalog the ever-increasing amount of information on the Web in a timely fashion. What's more, miles of newly installed fiber provide additional bandwidth, but hot spots just get hotter and cold pipes remain unused when everyone goes to sites like Yahoo for content and eBay for auctions. Finally, new processors and storage devices continue to break speed and capacity records, supporting more powerful end devices throughout the network, but computation continues to accumulate around data centers, whose workloads grow at a crippling pace, putting immense pressure on space and power consumption.
P2P networking technologies can greatly improve the utilization of Internet resources. A distributed routing architecture can increase networking pipes' effective bandwidth, for example, by load-balancing traffic and thus reducing the peak load on networks. (Of course, simple-minded designs, such as vanilla Gnutella-style systems that use flood routing, can also add a large amount of unnecessary traffic.) Still, in addition to improving performance in information discovery, content delivery, and information processing, P2P-style computing could enhance the reliability and fault tolerance of the global computing system. For example, a P2P e-mail delivery system can send messages directly to the receiving peer if it is present and reachable. This reduces dependency on mail servers, which tend to be heavily loaded. Moreover, by splicing an e-mail message under a threshold scheme and then sending the message slices via different network paths, P2P e-mail systems can prevent casual eavesdroppers from intercepting the message.
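The message-splicing idea can be sketched concretely. Below is a minimal n-of-n XOR split, a degenerate case of a threshold scheme in which every slice is required for reconstruction; the function names are illustrative, and a real system would use a k-of-n scheme such as Shamir's secret sharing so that lost slices could be tolerated.

```python
import os

def split_message(message: bytes, n: int) -> list[bytes]:
    """Split a message into n slices; all n are needed to reconstruct.

    n-1 slices are random one-time pads, and the final slice is the
    message XORed with every pad. Any n-1 slices alone reveal nothing
    about the plaintext, so an eavesdropper on one path learns nothing.
    """
    pads = [os.urandom(len(message)) for _ in range(n - 1)]
    last = bytes(message)
    for pad in pads:
        last = bytes(a ^ b for a, b in zip(last, pad))
    return pads + [last]

def join_slices(slices: list[bytes]) -> bytes:
    """XOR all slices together to recover the original message."""
    out = bytes(len(slices[0]))
    for s in slices:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out
```

Each slice would then travel to the receiving peer over a different network path, so that no single intermediate node sees enough to reconstruct the message.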
Whether P2P becomes a lasting phenomenon will depend on whether interesting applications emerge and become widely adopted. Many people — especially those who have spent years in distributed computing research — feel that the technical issues the P2P community is facing are nothing new (naming, discovery, security, and so on). Recent technological advances have changed the context of the discussion significantly, however, by breaking down many of the barriers to applying P2P ideas.
A short list of possible P2P applications includes distributed search and information discovery, content delivery and file sharing, distributed processing, and direct messaging such as P2P e-mail.
Despite some negative news commentary on things like P2P's unproven commercial viability or the pending demise of certain P2P startups, there is evidence of strong interest in P2P technologies. Both Gnutella and Freenet have attracted significant numbers of users, for example, as well as developers who are working to improve the original designs. As a more recent example, more than 7,000 people have registered to participate in Sun's JXTA project (http://www.jxta.org/) since its April 2001 public launch. The site now includes 54 separate development projects, and users have made close to 300,000 downloads.
This issue of IEEE Internet Computing includes five articles that explore a variety of aspects of peer-to-peer networking. The first article, "Protecting Free Expression Online with Freenet," by Ian Clarke et al., describes the scalability and fault tolerance of one of the most frequently cited P2P systems in deployment. Freenet provides strong authentication and file management capabilities, but it defeats efforts to monitor its activities by making it very difficult to discover the origin and destination of files passing through the network.
In "Mapping the Gnutella Network," Matei Ripeanu et al. describe another well-publicized P2P system. Gnutella first shot to public attention as the replacement for the Napster file-swapping service. This article reports the authors' examination of the topology of real-world Gnutella networks. It suggests design changes that could greatly improve the system's scalability by better matching the underlying Internet topology.
In "Improving Data Access in P2P Systems," Karl Aberer et al. present the Gridella system, which is modeled after and compatible with Gnutella. Indeed, the new self-organizing system aims to replace Gnutella in the long run by amending several shortcomings in the original system's design.
The article, "Distributed Search in Peer-to-Peer Networks," by Steve Waterhouse et al., outlines the inner workings of a search technology based on the JXTA platform. Unlike popular search engines that target static content, JXTA search can go beyond the Web and into databases by distributing queries and collecting replies in a P2P fashion.
Finally, Rainer Lienhart et al. outline an architecture for P2P multimedia applications and services in the article, "Improving Media Services on P2P Networks." The authors propose a resource management and adaptation framework that enhances quality of service, and report their experience in building the architecture and several showcase applications.
Together, these five articles provide an up-to-date look at P2P systems, both at the infrastructure and the service and application levels. Obviously, much remains to be done before P2P establishes itself as a lasting force. System monitoring, remote peer control, usage metering, and accounting methods are just a few of the areas that need further research, but we hope, as a Chinese proverb goes, that this theme issue is "the brick that calls out the jade."
We received a larger than usual number of submissions for this issue, many of which were of high quality, but space constraints dictated that we accept only a few. I would like to thank all who submitted their work, as well as the hard-working reviewers who gave of their precious time to help maintain the high quality of IC articles.