Issue No. 02, March/April 2005 (vol. 9), pp. 6-9. Published by the IEEE Computer Society.
ABSTRACT
Although some have championed the fundamentally disruptive architecture of IPv6, others have taken the more short-term, pragmatic, and incremental approach of increasing the number of network connections via NAT over IPv4. However, as the issue's subplots emerge, it's becoming clear that this argument is one element in the larger problem of deploying new applications and new architectures.
The debate over whether network address translation (NAT) is beneficial for the Internet or a base heresy destroying its end-to-end principle has read like a sine wave over the years, with periodic peaks of interest followed by relative calm.
As US broadband connections have gathered critical momentum, however, that debate might finally be reaching a constant, perceivable level among a technology-savvy cohort. The critical question facing the industry is whether it's time for nontechnologists to begin voicing their own opinions and needs, or whether they will instead face a network architecture that presents complicated interfaces and stifles innovation for years to come.
The orthodox view of the debate is that it has broken down into essentially two camps. The first is those who have championed the fundamentally disruptive architecture of IPv6, which promises greater address space with which to support more devices with fewer intermediary proxies. The second is those who have taken the more short-term, pragmatic, and incremental approach of increasing the number of network connections via NAT over IPv4; this approach provides LANs with one public IP address and numerous internal addresses that are unrecognizable to the network at large. However, as the issue's subplots emerge, these orthodoxies are being exposed as just one element in the larger problem of deploying new applications and new architectures.
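To make the trade-off concrete, the Python sketch below (with invented addresses) mimics the bookkeeping a NAT router performs: many private hosts share one public address, and only traffic matching an existing outbound mapping finds its way back in.

    # Illustrative sketch of the address/port translation a NAT router
    # performs. The addresses are invented for the example.
    PUBLIC_IP = "203.0.113.7"          # the LAN's single public address

    class Nat:
        def __init__(self):
            self.table = {}            # (private_ip, private_port) -> public_port
            self.reverse = {}          # public_port -> (private_ip, private_port)
            self.next_port = 40000

        def outbound(self, src_ip, src_port):
            """Rewrite an outgoing packet's source to the shared public address."""
            key = (src_ip, src_port)
            if key not in self.table:
                self.table[key] = self.next_port
                self.reverse[self.next_port] = key
                self.next_port += 1
            return PUBLIC_IP, self.table[key]

        def inbound(self, dst_port):
            """Route a reply back to the internal host, or drop unsolicited traffic."""
            return self.reverse.get(dst_port)   # None -> no mapping, packet dropped

    nat = Nat()
    print(nat.outbound("192.168.1.10", 5060))   # ('203.0.113.7', 40000)
    print(nat.inbound(40000))                   # ('192.168.1.10', 5060)
    print(nat.inbound(40001))                   # None: unsolicited, blocked

The last line is the crux of the debate: the same mechanism that conserves addresses also makes internal hosts unreachable to the network at large.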
Achieving Standardization
Early adopters of technologies that have long promised to lower costs and improve communication are already discovering that the world isn't quite ready to plug and play. To compound matters, little economic incentive remains for last-mile ISPs to alter their current offerings to deliver new services, greater bandwidth, or numerous IP addresses to customers. This is despite the fact that doing so would theoretically enable a wide range of new applications throughout the network, from entrepreneurial videoconferencing sites for small offices with multiple workers to appliances that could signal for service calls before they break down.
The very area in which the most revolutionary deployments of communications applications could occur — at the network edge for consumers and remote workers — is caught in an inertial limbo in which these technologies' likely users are either ignorant of the dilemma or unable to compel vendors and service providers to develop a standardized architecture that won't cause economic mayhem.
David Passmore, research director at the Burton Group (www.burtongroup.com), says the group's technology analysts are typical of early adopters. Most work from home and have taken to technologies such as VoIP telephony and Session Initiation Protocol (SIP)-enabled videoconferencing. However, Passmore says, the lack of standardized ways to deal with how NATs hide devices' addresses within each person's LAN has become problematic.
"People will get really frustrated by the fact that all it takes is one end to have an incompatible or poorly configured NAT router, and it blocks the SIP-based transmission. It shouldn't be that hard. It should just work, and it's going to have to just work before a lot of this stuff catches on. But we're still in the early days, so the pain hasn't reached critical mass yet."
In a recent study for Burton Group clients, Passmore criticized the way the IETF has handled NATs' technologically anarchistic proliferation.
"When it comes to network address translation, the IETF, the key Internet standards group, blew it," he wrote, citing the IETF's longstanding institutional view that NAT broke the end-to-end principle and therefore shouldn't be encouraged.
The IETF hasn't completely ignored NAT. Several working groups have contributed Internet drafts and requests for comment to the standards body, including RFC 3235, "Network Address Translator (NAT)-Friendly Application Design Guidelines," which originated from the NAT working group in 2002, and RFC 3489, "STUN - Simple Traversal of User Datagram Protocol (UDP) Through Network Address Translators (NATs)," from the Middlebox Communication (midcom) working group, which succeeded the NAT WG, in 2003.
Passmore contends that these efforts address the problem, but not completely enough to become an across-the-board solution the industry could easily adopt. For example, he writes that RFC 3235 offers guidance to application developers but doesn't address standardization of NAT operations.
And although Simple Traversal of UDP through NATs (STUN) might be a fine solution in a small office–home office setting, the protocol won't work with the symmetric NAT routers deployed by many large enterprises, which would necessitate yet another workaround.
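For readers who haven't met STUN, the mechanics are simple enough to sketch. The Python below sends a bare RFC 3489 Binding Request to a STUN server and parses the MAPPED-ADDRESS attribute from the Binding Response, which reveals the address and port the NAT presents to the outside world. The server name is a placeholder, and a real client would add retransmission and error handling.

    # Minimal sketch of an RFC 3489 STUN Binding Request. The server
    # hostname is an assumption for illustration, not a real endpoint.
    import os
    import socket
    import struct

    STUN_SERVER = ("stun.example.com", 3478)   # hypothetical server

    def stun_mapped_address():
        txn_id = os.urandom(16)
        # Header: type 0x0001 (Binding Request), length 0, 16-byte transaction ID.
        request = struct.pack("!HH16s", 0x0001, 0, txn_id)

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(2.0)
        sock.sendto(request, STUN_SERVER)
        data, _ = sock.recvfrom(2048)

        msg_type, msg_len, resp_txn = struct.unpack("!HH16s", data[:20])
        assert msg_type == 0x0101 and resp_txn == txn_id   # Binding Response

        # Walk the attributes looking for MAPPED-ADDRESS (type 0x0001).
        offset = 20
        while offset < 20 + msg_len:
            attr_type, attr_len = struct.unpack("!HH", data[offset:offset + 4])
            if attr_type == 0x0001:
                _, family, port = struct.unpack("!BBH", data[offset + 4:offset + 8])
                ip = socket.inet_ntoa(data[offset + 8:offset + 12])
                return ip, port
            offset += 4 + attr_len

    if __name__ == "__main__":
        print(stun_mapped_address())   # the address the outside world sees

The technique works because most NATs reuse the same external mapping for subsequent traffic; a symmetric NAT allocates a fresh mapping per destination, which is why the discovered address is useless there.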
However, longtime IETF participant and former Internet Architecture Board chairman John Klensin contends that even if the organization had standardized NAT traversal, the market probably would have followed its own path with regard to the technology.
"We could have developed an IETF standard for how you deal with a NAT and abandoned the end-to-end principle," Klensin says, "and perhaps a case can be made either way. But all the evidence suggests that had we developed such a standard, it would have been ignored. The reason these per-protocol solutions get developed and turn these NATs into application-layer gateways — a far more serious problem than NATs themselves, in some respects — is because the applications are different, and they need different kinds of facilities. So in that sense, I don't think there's any problem."
Instead, Klensin says, the industry has shortchanged the typical end user by emphasizing NAT networks in last-mile configurations and by giving those users little or no help in expanding those networks' capabilities beyond passive client-only chores such as Web surfing and emailing.
"There is a problem in the autoconfiguration space in small network configuration by unskilled operators that the IETF has not managed to solve," Klensin says. "Whether that's because it's unsolvable or [because] the IETF has been too lazy or irresponsible, the reality is it's unsolved, and IETF participants are being a little bit disingenuous in saying we should get rid of NATs [without] having some kind of solution to that set of problems." Klensin says the IETF is being even more disingenuous in claiming that installing IPv6 will make NATs go away because the equipment doesn't yet exist to make that happen.
Tunnels and Proxies and Gateways, Oh My
Last-mile router vendors have developed versatile products to work around address-transparency issues. Most NAT routers include options for manual configuration that will let IP telephony or online gaming applications function. This is useful, but it becomes problematic when an application tries to contact a device behind a NAT without a mapping having been approved beforehand. Many routers also feature stateful firewalls that can classify which packets attempting to enter the LAN are legitimate. In many ways, novices and experts alike consider such NAT firewalls a key first line of defense against viruses and worms.
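The stateful check such firewalls apply amounts to remembering which flows internal hosts have opened and admitting only matching replies. The toy Python sketch below illustrates the idea; the addresses are invented, and a real implementation would also track protocol state and timeouts.

    # Sketch of the stateful check a NAT firewall applies: inbound packets
    # are admitted only if they match a flow an internal host already opened.
    established = set()   # (internal_ip, internal_port, remote_ip, remote_port)

    def outbound(int_ip, int_port, rem_ip, rem_port):
        """Record a flow initiated from inside the LAN."""
        established.add((int_ip, int_port, rem_ip, rem_port))

    def inbound_allowed(rem_ip, rem_port, int_ip, int_port):
        """Admit a packet only if it answers a recorded outbound flow."""
        return (int_ip, int_port, rem_ip, rem_port) in established

    outbound("192.168.1.10", 5060, "198.51.100.4", 5060)    # SIP call goes out
    print(inbound_allowed("198.51.100.4", 5060, "192.168.1.10", 5060))   # True
    print(inbound_allowed("198.51.100.99", 5060, "192.168.1.10", 5060))  # False: unsolicited

Note that nothing in this logic requires address translation, which is the point Wasserman makes next.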
"If you use a NAT firewall, you get certain types of protection that are important to have," says IETF Internet area director and former IPv6 working group chair, Margaret Wasserman. She adds, however, that building a similar firewall that doesn't need to translate addresses is "trivially easy." Furthermore, IPv6 users' ability to authorize which addresses can be reached and which should stay inaccessible is an improvement over NAT, which translates everything, forcing intermediary proxy devices or services to complete connections such as VoIP, SIP, and instant messaging.
"The issue with NAT in those cases is not that you can't do those things," she says. "It's that it forces a third-party service model into what should be peer-to-peer communication."
Wasserman, Klensin, and Passmore assert that last-mile ISPs actually prefer to issue customers NAT boxes and single dynamic external IP addresses. This way, they preserve addresses and, perhaps not coincidentally, keep traffic levels lower than they might be in a fully enabled, symmetric network.
"Their whole network is designed around two things," Passmore says. "It's engineered for oversubscription, and it's engineered for asymmetrical traffic flows, especially DSL. So, given those two things, ISPs have to provide a financial disincentive for people to ever do any of their own hosting from their homes. That's why they make available so many megabytes of storage. But they definitely don't want you to have the ability to put up your own email server, or whatever server you might want, from your home. The second thing that continues to be a battle is the providers trying to differentiate between residential versus business customers."
To reinforce their attempts to limit the amount of client traffic in last-mile settings, Passmore says, ISPs have taken steps such as blocking ports and closing down connections on which they detect the IP Security protocol; several large ISPs have unilaterally blocked port 25, for example, figuring that anybody using it is running an email server. Most also charge significantly more for static IP addresses than for dynamically assigned ones. This policy can make it extremely difficult for a traveling worker whose laptop is configured with a static IP address to reach a corporate virtual private network through a hotel hotspot, or from a customer's home or office, where a NAT assigns addresses dynamically and doesn't recognize the static one.
Klensin says that in an effort to use the network in ways that meet their own needs rather than serve the ISPs' bottom line, entrepreneurs are devising elaborate workarounds that further complicate network deployments.
"The oddity is, we're developing new ways of doing things in response to the situation that are unhealthy," he says. "They're again raising costs in terms of aggravation and required technical skill and dollars to the end users, and that's probably not a societal good; they're creating much more inertia behind new applications and new ways of doing things, and that's definitely not a societal good. But whether the problem is the NATs or a whole series of strange ways of doing things is an open question."
The Path Not Taken
Stanford University professor David Cheriton might rightfully be considered a networking pioneer with a knack for technologies that will eventually pay off. Cheriton was one of Google's first investors, for example, and he also sold a startup he began with Sun Microsystems cofounder Andy Bechtolsheim to Cisco. Cheriton feels that the standards community has made a mistake in pursuing IPv6 over NAT, and that the ensuing multitude of ad hoc workarounds greatly endangers not only communication but also security.
"I don't think the Internet is doomed or anything like that, but what is disturbing is [that] it has become dramatically more complicated because of the accretion of ad hoc solutions, and that has created a situation where it's an obvious target for attack. I think most technologists would agree that one of the greatest difficulties in making the network secure from attack is just [its] complexity. We should be pursuing a strategy of trying to simplify things so you can understand the points of attack and tighten up the bolts. And we're headed, I think, in the wrong direction."
In 2000, Cheriton and one of his graduate students, Mark Gritter, detailed a NAT-based Internet architecture called Translating Relaying Internet Architecture Integrating Active Directories (Triad; http://citeseer.ist.psu.edu/cache/papers/cs/1888/http:zSzzSzwww.csd.uch.grzSz~markatoszSzpaperszSztriad.pdf/cheriton00triad.pdf). Triad bases all identification on DNS names rather than end-to-end IP addresses, provides end-to-end semantics with a name-based, transport-level pseudoheader, and uses a simple shim protocol atop IPv4 to extend addresses across IPv4 realms.
"You have to identify the endpoints in communications based on some identifier [that is] meaningful to the application — to the person at the character-string name level — not a bunch of stupid bits," Cheriton says. "The bits can be an optimization in the same sense [that] you don't dial my name into the phone system to get to my phone."
However, Cheriton believes that the IETF never gave Triad due consideration, and he's troubled that "the standards body turned left while the real world turned right" with ad hoc NAT.
"As far as I could tell, the attitude from the beginning of the Internet through to the IETF is [that] they make it just sort of an optional add-on, a convenience feature, but that doesn't make any sense to me at all. I feel I've experienced years of having papers rejected from research conferences, of being sort of ignored or snubbed by standards people. And I probably haven't worked as hard as I should to promote things, but there's clearly just a resentment that 'We like things the way they are; they've been very successful so far.'"
Conclusion
Although it might be tempting to compare Cheriton's technology to the Sony Betamax, Klensin says the deciding factor in VHS tape defeating Betamax wasn't technological superiority, but perceived value among consumers, who preferred the VHS format's extended recording capability.
"Betamax was a clearly better technology, but the public didn't care about superior delivery, they cared about that extra hour. Assuming a completely rational market, you have to work rather carefully to explain not only [the] network implications, but [the] manufacturing implications, support implications, and end-user implications of Dave's idea to the guy whose notion of the Internet experience is half a step up from his notion of the television experience — and as far as I know, nobody's ever done that analysis."
Whether it's time to do that analysis and draw the market at large into the debate might be a pertinent discussion in itself.
Greg Goth is a freelance technology writer based in Connecticut.