IEEE Internet Computing, vol. 16, no. 6, Nov.-Dec. 2012, pp. 11-13. Published by the IEEE Computer Society.
Charles Petrie , Stanford University
Oliver Spatscheck , AT&T Labs — Research
ABSTRACT
The Internet is based on a layered set of protocols — the most basic of which is TCP/IP — and the servers and routers that support them. This special issue addresses challenges to the Internet's basic protocols as our collective Internet usage increases in both magnitude and type and becomes increasingly crucial for our economy and culture. "Designing a Deployable Internet: The Locator/Identifier Separation Protocol" shows how to achieve better scalability for mobile devices by separating the location function of IP addresses from their identification function. "IPv6 Deployment and Spam Challenges" shows that IPv6 introduces new problems for fighting spam and denial-of-service attacks, and offers solutions. "All-Weather Transport Essentials" surveys current Internet protocols and proposes principles for revising them to address known problems that can be expected to become more serious in the near future.
The Internet is based on a layered set of protocols — the most basic of which is TCP/IP — and the servers and routers that support them. As the Internet becomes increasingly crucial to our economy and culture and our collective usage increases in both magnitude and type, challenges are arising for the Internet's basic protocols.
The Internet has been "about to fail" since its commercialization — or at least so we've heard on a regular basis. The most famous of these announcements were Bob Metcalfe's prediction that the Internet would suffer a "gigalapse," and his subsequent assurance that the Internet could never be wireless (see www.computer.org/portal/web/internet/extras/Bob-Metcalf). In this same 1997 interview with one of us (Charles Petrie), Metcalfe noted that the Internet had several basic problems, most of which could be solved by the free market and service-based pricing. For instance, he suggested that the solution to email spam would be to charge for sending email, just as we charge for postal mail.
Although such predictions and "solutions" appear quaint today, other more recent predictions about major growth in the Internet appear to be well on their way to coming true. For instance, Vint Cerf's September 2008 post on the Google blog predicts a future that isn't here yet (see http://googleblog.blogspot.com/2008/09/next-internet.html), but we can easily imagine that hyperlinks will soon appear in video, and that almost any of your appliances will be able to tell you where you left your glasses. In the face of such overwhelming change, we can easily see that the protocols underlying the Internet, developed in the last century, won't be sufficient, or at least not efficient, for the next.
Emergent Challenges
In fact, real challenges to the Internet's underlying technology have already arisen. Most importantly, the pool of unallocated IPv4 addresses was exhausted in 2011, forcing providers to move to the newer IPv6, which addresses problems beyond the exhausted address space, including the need for better control over quality of service and support for larger packets.
These dramatic changes in Internet-enabled applications and services are widely recognized. Less so is the fact that dramatic changes have also occurred in the Internet's underlying building blocks and indeed its fundamental structure. In a recent IC article, one of us (Oliver Spatscheck) explored the impact of multiprotocol label-switching (MPLS) technology as well as the flattening of the Internet's autonomous system (AS) topology.1 These are "under-the-hood" changes that have serious implications for the backbones, routers, and switches that drive our Internet services.
We can expect even greater transformations in how the Internet is used in the future, requiring significant changes in its underlying protocols. Given our experience with how long it took to adopt IPv6, it's clearly not too early to start considering the issues and alternatives.
One major challenge is the Internet of Things described in Cerf's blog, among other places (including Cerf's Backspace column in this issue of IC). Today, it's mainly humans who are connected to the Internet. Although we humans drove the Internet's vast expansion, that growth is clearly coming to an end in the developed world, given that most people there are already highly connected. This might lead us to ask whether the growth, and therefore the scaling demands on Internet protocols such as routing, is also coming to an end. But this isn't the case — the next revolution has already started.
The Internet of Things, which envisions that everything that has power will connect to the Internet, will drive growth for years to come. It will demand not only semantically rich, high-level protocols and agents to make it usable for humans, but also basic underlying infrastructure protocols that can support the expected growth of mobile, connected devices, each of which might communicate infrequently but must be reachable at all times.
In This Issue
To address the aforementioned growth, the article "Designing a Deployable Internet: The Locator/Identifier Separation Protocol," by Damien Saucez, Luigi Iannone, Olivier Bonaventure, and Dino Farinacci, presents LISP, which aims to achieve better scalability in the routing domain, in particular for mobile devices. It does this by separating IP addresses' location from their identification function. Changing such a fundamental concept in the Internet is a challenging task. The article highlights this challenge and shows how LISP and other new protocols can overcome it by being incrementally deployable today. In addition to achieving its original goal, LISP can also provide scalability benefits for such diverse applications as IPv6 migration, network virtualization, virtual machine mobility, and device mobility, as described in the article.
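To make the locator/identifier split concrete, here is a minimal Python sketch of the map-and-encapsulate idea (our illustration, greatly simplified from the actual LISP specification; all names and addresses are hypothetical documentation values): a device keeps a stable endpoint identifier (EID) while its routing locator (RLOC) changes as it moves, and a mapping service translates between the two.

    # Illustrative sketch of locator/identifier separation; not the real LISP protocol.
    # An endpoint identifier (EID) names "who" a device is; a routing locator (RLOC)
    # names "where" it currently attaches to the network.

    mapping_system = {
        "2001:db8::42": "198.51.100.7",  # EID -> current RLOC (attached via provider A)
    }

    def update_mapping(eid, new_rloc):
        """Called when a device moves: the EID stays the same, only the RLOC changes."""
        mapping_system[eid] = new_rloc

    def forward_packet(dst_eid, payload):
        """An ingress tunnel router looks up the locator and encapsulates the packet."""
        rloc = mapping_system.get(dst_eid)
        if rloc is None:
            return None  # no mapping known: drop, or query the mapping system
        # The outer header carries the locator; the inner header keeps the identifier.
        return {"outer_dst": rloc, "inner_dst": dst_eid, "payload": payload}

    # The device moves to provider B; correspondents keep addressing the same EID.
    update_mapping("2001:db8::42", "203.0.113.9")
    print(forward_packet("2001:db8::42", "hello"))

Because correspondents address only the identifier and mobility changes nothing but the mapping entry, the device stays reachable without injecting new routes into the global routing tables, which is where the scalability benefit comes from.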
As Metcalfe noted in 1997, email spam has been with us since the Internet's commercialization. Providers and users have adopted numerous technologies to combat this scourge. In their article "IPv6 Deployment and Spam Challenges," Hosnieh Rafiee, Martin von Löwis, and Christoph Meinel make the perhaps surprising point that revising the basic Internet protocol to address some serious problems, such as the lack of address space, makes some of these spam-combating technologies ineffective.
The central issue is that IPv6 lets spammers use many temporary IP addresses, making blacklisting almost irrelevant. Content-based filtering might still work, and whitelisting should work as well as before, but IPv6 didn't solve the authentication issue, so spammers can still spoof whitelisted addresses. The problem extends beyond email spam: the article points out that the same issues apply to bots and denial-of-service (DoS) attacks, which can bring down large and important parts of the Internet. Such attacks are a serious threat to the Internet as a whole: although they exploit security weaknesses in only a few popular operating systems, they affect everyone when major services go down for hours. Today, Internet services are so critical to both consumers and businesses that such disruptions have a major economic impact. The article analyzes these issues and points toward some solutions.
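A back-of-the-envelope Python sketch (our illustration, not drawn from the article) shows why per-address blacklisting breaks down under IPv6: a single standard /64 subnet holds 2^64 addresses, so a sender rotating temporary addresses never needs to reuse one, and only aggregating blocks to the prefix level has any hope of keeping up. The addresses below are hypothetical documentation values.

    import ipaddress

    # One standard IPv6 subnet (/64) holds 2**64 addresses, roughly 4 billion
    # times the size of the entire IPv4 address space (2**32).
    print(2**64 // 2**32)  # 4294967296

    # A sender rotating temporary addresses inside one documentation prefix.
    prefix = ipaddress.ip_network("2001:db8:abcd:1::/64")
    seen = ["2001:db8:abcd:1::1a2b", "2001:db8:abcd:1::9f03", "2001:db8:abcd:1::77c4"]

    # Per-address blacklisting never converges: the next message can always
    # come from a fresh, never-before-seen address.
    blacklist = set(seen)
    print("2001:db8:abcd:1::dead" in blacklist)  # False, so the new address gets through

    # Aggregating to the /64 prefix blocks the whole subnet at once.
    blocked_prefixes = {prefix}
    new_addr = ipaddress.ip_address("2001:db8:abcd:1::dead")
    print(any(new_addr in p for p in blocked_prefixes))  # True

Prefix-level blocking in turn risks collateral damage to legitimate hosts sharing the subnet, which illustrates why blacklisting alone is a poor fit for IPv6.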
Finally, in "All-Weather Transport Essentials," Christof Leng, Wesley W. Terpstra, Max Lehn, Robert Rehner, and Alejandro Buchmann survey current basic protocol alternatives and their relationship to several known problems. To a large extent, this is a survey that should be valuable to nonexperts interested in how Internet protocols affect the functions they use every day. The authors use the Web's HTTP, the most popular protocol, as a common example, but they also examine many other important protocols and note their problems.
Beyond its survey aspects, this article offers two principles for revising existing protocols. First, the authors argue that implementation must be evolutionary, rather than revolutionary, and show how this is possible, as in the LISP article. This might seem obvious to some readers, but it's in contrast to some major research initiatives that aim to redesign protocols from scratch. The authors describe a set of new protocols that address a large group of problems, but they also explain how to implement these protocols gradually by tunneling over older ones.
As the article title suggests, the authors also posit the important protocol design principle that worst cases are exceptional, so protocols should be optimized for the more common, good cases. Again, this runs counter to the conventional wisdom that protocols should be designed for the worst-case scenario. In fact, worst-case design is one of TCP/IP's strengths, but the authors describe how many current inefficiencies leading to congestion result from not following a more optimistic principle. They show how existing alternatives can be reconfigured into a more rational set of protocols, far better optimized for current and future Internet usage, in which the Internet's "weather" is very different from what it was in the network's early days. The article surveys a very wide range of issues and protocols, and some of its ideas might be controversial among experts.
We hope and expect that the articles in this special issue will help stimulate much-needed discussion and action among those who will guide the Internet through the remainder of the 21st century.

Reference

Charles Petrie retired from Stanford University as a senior research scientist with the CS Logic Group. He is a guest professor at Karlsruhe University, Germany, and at the University of St. Gallen, Switzerland, for 2012. Petrie has a PhD in computer science from the University of Texas at Austin. He was a founding member of the technical staff of the MCC AI Lab, founding editor in chief of IEEE Internet Computing, founding executive director of the Stanford Networking Research Center, and founding chair of the Semantic Web Services Challenge. Contact him at petrie@stanford.edu; http://www-cdr.stanford.edu/%7Epetrie/bio.html.
Oliver Spatscheck is a lead member of technical staff at AT&T Labs — Research, where he received the AT&T Science and Technology Medal in 2007. His research interests include content distribution, network measurement, and cross-layer network optimizations. Spatscheck has a PhD in computer science from the University of Arizona. He has published more than 50 research articles, received 35 issued patents, and coauthored a book on Web caching and replication. Contact him at spatsch@research.att.com.