Issue No. 1, Jan.-Feb. 2013 (vol. 17), pp. 3-6
Published by the IEEE Computer Society
Oliver Spatscheck , AT&T Labs—Research
ABSTRACT
Layering at the protocol stack and beyond has contributed hugely to the Internet's success, letting functionality be introduced basically overnight. However, it's increasingly hampering the reliability of the network it created.
Even though it should be self-evident to everyone at this point, it's difficult to avoid repeating the fact that the Internet has grown beyond belief during my lifetime. Taking this as a given, I'd like to focus on some of the Internet's growing pains. I don't propose a solution to the problems I highlight, but being aware of a problem is often the first step toward fixing it.
The problem I want to focus on is layering. Before diving deeper into the topic, let me first define what I mean by "layering." Traditionally, the term has been used to define protocol layers; for example, the OSI protocol stack. However, here I use a broader definition that includes the protocol stack as well as every functional module that provides a well-defined interface or service on the Internet. For example, using this definition, the interfaces in a service-oriented architecture define boundaries between layers, too.
Layering has clearly contributed hugely to the Internet's success by allowing functionality to be introduced basically overnight; however, increasingly, it's hampering the reliability of the network it created.
Driving Innovation
Let's first examine some positives. Layering started early in the Internet. The IP layer provides only basic, host-to-host, best-effort connectivity. Protocols such as UDP and TCP addressed some of these limitations pretty much from day one. It didn't stop there, though. Layers such as the Secure Sockets Layer (SSL), Transport Layer Security (TLS), and IPsec provide security; FTP and HTTP provide remote object access; SOAP and RPC offer remote code execution; and the GPRS Tunneling Protocol (GTP) provides cellular network support. These are only a few of the protocols used on top of the IP layer; we shouldn't forget the layers below, such as Multiprotocol Label Switching (MPLS) and the Ethernet protocol (ETH). I could easily fill my allotted space just listing protocols; however, the important point to take away is that many layers exist, and the ease with which we can quickly add layers to address a new need, often without any standards committee, is a major force behind the Internet's success. To highlight this point, note that many of the protocols I've listed were standardized in the IETF only after they were already in widespread use.
What's Wrong?
Let's now turn to the other side of the coin. The main issue with layering is that layers hide information from each other. We could see this as a benefit, because it reduces the complexity involved in adding more layers, thus reducing the cost of introducing new services. However, hiding information can lead to complex and dynamic layer interactions that hamper the end-to-end system's reliability and are extremely difficult, if not impossible, to debug and operate. So, much of the savings achieved when introducing new services is spent operating them reliably. To make this point clearer, let's look at a case in which I was recently involved.
Company A was using a wireless machine-to-machine service for a mission-critical application and was experiencing a two-minute outage in one or two of its machine-to-machine systems every day — an unacceptable disruption considering the application's importance. The application had been built by a third party B, was hosted by a specialized cloud service provider C in yet another party D's data center, and communicated with machines in the field using a cellular network E as the primary communication channel. Note that, in this case, every provider was meeting its service-level agreement (SLA). So, every layer was performing exactly as promised.
When confronted with this issue, the first hurdle was establishing a cross-provider team spanning A through E that was legally allowed to discuss the problem, particularly because it also involved information received from the providers' various equipment vendors, covered under a multitude of nondisclosure agreements. This part isn't strictly a technical problem, but it does highlight that layering has progressed from the technical realm to the business realm, and that hiding information has a similar effect in preventing stakeholders from quickly and efficiently resolving business-layer interactions. Thus, solving this problem might require not only technical but also legal changes that enable quicker data exchange between providers to facilitate outage resolution.
Let's return to the problem's technical aspects. Given all these providers, let's look at a packet as we might have observed it in the wireless provider's core network (the area with which I'm most familiar). The packet would have included the headers in Figure 1. Because the application details are still somewhat fuzzy to me, I combined them in the "application" box; closer examination would certainly uncover layers within this box as well. Focusing on the layers below "application," we can see not only that a substantial amount of bandwidth is wasted — for example, transmitting the single bit of information contained in a TCP ACK requires roughly 800 bits of headers, an 80,000 percent overhead — but also that those layers can have myriad unexpected interactions. To highlight this in more detail, let's examine some of the dynamics we might encounter at each layer.


Figure 1. Typical packet. The figure describes the layers of a typical packet as seen in the cellular access network of provider E in the use case discussed (from the bottom up: ETH, MPLS, IP, UDP, GTP, a second IP header, and the application layers above).
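To make the overhead figure concrete, here's a minimal back-of-the-envelope tally in Python. The header sizes are typical minimums I've assumed (a 14-byte Ethernet header, two MPLS labels, option-free IP and TCP headers, and a minimal GTP-U header); real packets vary, so treat this as a sketch rather than an exact accounting.

```python
# Rough tally of the header bits in Figure 1 for a payload carrying a
# single bit of information (e.g., a TCP ACK). All sizes are assumed
# typical minimums; real packets vary.
HEADER_BYTES = {
    "ETH": 14,    # Ethernet header (FCS and preamble excluded)
    "MPLS": 8,    # two 4-byte labels (assumption)
    "IP": 20,     # outer IP header, no options
    "UDP": 8,
    "GTP": 8,     # minimal GTP-U header
    "IP2": 20,    # inner, endpoint-visible IP header
    "TCP": 20,    # no options
}

total_bits = 8 * sum(HEADER_BYTES.values())
print(f"header bits: {total_bits}")                                 # 784, roughly 800
print(f"overhead vs. 1 payload bit: {total_bits * 100:,} percent")  # ~80,000 percent
```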

Starting from the bottom up: Ethernet not only increasingly dominates the local network, but it's also being used for point-to-point connectivity in large core networks, replacing Packet over SONET (POS) and other more expensive technologies. Even though Ethernet point-to-point links seem to be rather stable entities, even this layer is dynamic. Core routers often need more capacity than a single physical Ethernet link can provide, so many links in a core network are implemented as bundles that are abstracted into a single logical link. Cisco calls such a bundle an EtherChannel; other vendors provide similar functionality under different names. Having multiple physical links inside one logical Ethernet link leads to a load-balancing problem. Because TCP can't handle reordering efficiently, even the Ethernet layer tries to preserve the ordering of load-balanced traffic. Thus, load balancing is often performed based on a hash of the IP addresses in the first IP header encountered (which, incidentally, breaks the layering concept, because Ethernet isn't supposed to be aware of the IP layer). This in turn leads to unequal load on the physical links, resulting in cases where some traffic on a logical link has great performance and other traffic doesn't. Users can't always discover this at the endpoint, because probing done from the endpoint might or might not traverse the congested physical link.
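A minimal sketch of this flow-preserving hash, with hypothetical member counts, addresses, and flow rates, shows how one heavy flow can congest a single member link while the bundle as a whole looks healthy:

```python
import hashlib

N_LINKS = 4  # physical members of one logical bundle (assumption)

def pick_member(src_ip: str, dst_ip: str) -> int:
    """Hash the first IP header's addresses to one member link, so each
    flow stays on a single link and arrives in order."""
    digest = hashlib.md5(f"{src_ip}->{dst_ip}".encode()).digest()
    return digest[0] % N_LINKS

# Hypothetical flows: one elephant flow and several small ones (Mbit/s).
flows = {
    ("10.0.0.1", "10.0.1.1"): 9_000,
    ("10.0.0.2", "10.0.1.2"): 200,
    ("10.0.0.3", "10.0.1.3"): 150,
    ("10.0.0.4", "10.0.1.4"): 100,
}

load = [0] * N_LINKS
for (src, dst), mbps in flows.items():
    load[pick_member(src, dst)] += mbps

# One member link can end up carrying far more traffic than the rest,
# while a probe flow may hash to an uncongested member and see no problem.
print(load)
```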
The next layer is MPLS, which can add multiple MPLS headers. Core networks have used MPLS for more than a decade; it provides a set of labels that core routers can use to forward traffic without having to interpret inner headers, such as the IP header. In fact, MPLS has led to many core networks being entirely IP-unaware. The core's ingress edge routers determine the destination egress edge router for a packet and select an MPLS path, based on cost or policy, that will get the packet there. The core routers make no IP-layer routing decisions any more — in theory, that is. Given multiple equal-cost paths of logical links (remember, an Ethernet link is logical) between two edge routers, the routers must still ensure that traffic is load balanced between those paths while preserving in-order delivery of most packets to maintain TCP efficiency. In practice, this means that most routers still look at the first IP header to hash the traffic, with effects similar to those in the Ethernet load-balancing case.
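The same tension shows up one layer higher. In this sketch (the label table and next hops are hypothetical), the router forwards purely on the top MPLS label until it hits equal-cost paths, at which point it must peek at the first IP header after all:

```python
# Hypothetical label table: top MPLS label -> list of equal-cost next hops.
LABEL_TABLE = {
    100: ["egress-router-A"],           # single path: pure label switching
    200: ["lsp-path-1", "lsp-path-2"],  # two equal-cost paths to one egress
}

def forward(top_label: int, inner_src_ip: str, inner_dst_ip: str) -> str:
    next_hops = LABEL_TABLE[top_label]
    if len(next_hops) == 1:
        return next_hops[0]  # IP-unaware forwarding, as MPLS intends
    # Equal-cost multipath: peek below the label stack and hash the first
    # IP header so each flow stays on one path (preserving TCP ordering).
    index = hash((inner_src_ip, inner_dst_ip)) % len(next_hops)
    return next_hops[index]

print(forward(200, "10.0.0.1", "10.0.1.1"))  # stable within one run
```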
The next three layers, IP, UDP, and GTP, provide the connectivity between the cellular network infrastructure's elements. Most cellular networks use a private IP network to connect their cell towers to the anchor point of the wireless user equipment's IP addresses; your smartphone's IP address, for example, must have one point that knows the route to it so the device can move around. Typically, this connectivity is managed using GTP, which establishes state on the cellular network elements and provides tunneling capabilities for the actual Internet traffic the end user sees. GTP is far too complex for me to cover here, so let me just highlight that this architectural choice adds dynamic elements both on the private network's first IP layer (IP routing) and at the GTP level, which can dynamically select mobility anchor points based on load and policy.
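To give a flavor of the tunneling without GTP's full complexity, here's a sketch of the user-plane encapsulation (GTP-U): a minimal 8-byte header carrying a version/flags byte, the G-PDU message type, the payload length, and a tunnel endpoint identifier, prepended to the user's IP packet and carried over UDP port 2152. The TEID and inner packet below are placeholders.

```python
import struct

GTPU_PORT = 2152  # registered UDP port for GTP-U

def gtpu_encapsulate(teid: int, inner_ip_packet: bytes) -> bytes:
    """Prepend a minimal 8-byte GTP-U header (version 1, G-PDU) to an
    inner IP packet; the result travels over UDP/IP between the cell
    tower and the mobility anchor, as described above."""
    flags = 0x30         # version=1, protocol type=GTP, no optional fields
    msg_type = 0xFF      # G-PDU: the payload is a user IP packet
    length = len(inner_ip_packet)
    return struct.pack("!BBHI", flags, msg_type, length, teid) + inner_ip_packet

# Placeholder inner packet: a 20-byte option-free IPv4 header.
tunnel_payload = gtpu_encapsulate(teid=0x1234, inner_ip_packet=b"\x45" + b"\x00" * 19)
print(tunnel_payload.hex())
```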
At this point, we've reached the second IP header, which is the first IP header actually visible to the endpoints. The layering on top of this is more traditional. I'll only briefly mention that, in the example case, there was typical Internet routing on the IP layer as well as load balancing between multiple instances of the application.
Even though this description of each layer's dynamics is far from complete, it should highlight the issue providers A through E faced when asked to debug a two-minute outage occurring quite infrequently. Which of those dynamics caused the issue? How did they cause it? Each layer collected statistics individually and reported proper operation — in fact, all were meeting their SLAs. It took quite some time to get to the bottom of the problem. In the end, the root cause of the outage's duration was a complex interaction: a small 1–2 second delay at the network layer triggered timeouts at the application layer, which in turn caused the intermediate layers to reestablish their state in a very inefficient manner. The fix was rather simple.
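As a toy model of that interaction (all timings are hypothetical, not the measured values from the case), note how a short network delay turns into a minutes-long outage once an aggressive application timeout forces the layers below it to rebuild their state:

```python
# All timings are made up for illustration; the real case differed.
NETWORK_DELAY_S = 2       # a brief hiccup at the network layer
APP_TIMEOUT_S = 1         # the application gives up before the hiccup ends
REBUILD_S = {             # hypothetical per-layer state re-establishment
    "TCP reconnect": 3,
    "GTP tunnel re-setup": 30,
    "application re-login and resync": 82,
}

if NETWORK_DELAY_S > APP_TIMEOUT_S:
    # The timeout fires, tearing down state that must be rebuilt serially.
    outage_s = NETWORK_DELAY_S + sum(REBUILD_S.values())
else:
    outage_s = NETWORK_DELAY_S

print(f"user-visible outage: {outage_s} seconds")  # ~2 minutes, not ~2 seconds
```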
Despite finding the root cause in this one case, it's clear that for most Internet applications, people won't be willing to build cross-provider teams that spend substantial resources determining the root cause of two-minute service outages affecting one endpoint once a day. Thus, we must pay more attention to research into automated cross-layer debugging. It's much easier to write a research paper focusing on a single layer, but I hope somebody takes up this challenge. Otherwise, we'll have to resign ourselves to random outages at the service layer with nobody knowing why.
Oliver Spatscheck is a lead member of technical staff at AT&T Labs — Research, where he received the AT&T Science and Technology Medal in 2007. His research interests include content distribution, network measurement, and cross-layer network optimizations. Spatscheck has a PhD in computer science from the University of Arizona. He has published more than 50 research articles, received 39 issued patents, and coauthored a book on Web caching and replication. Contact him at spatsch@research.att.com.