Vol. 6, no. 5, September/October 2002, pp. 38-40. Published by the IEEE Computer Society.
ABSTRACT
Apart from an increased focus on security and the additional server software needed to support the emerging standards, the logical architecture of data centers should be able to meet the demands of these new applications without major changes. Important lessons for the future are thus available in today's solutions for global data center deployment.
Data centers first arose from IT efforts to consolidate the server and storage resources of one or more enterprises so that they could be centrally managed and shared among various departments and functions. IT architectures involving multiple data centers distributed around the globe exist in part due to the evolutionary nature of enterprises and in part by design.
By effectively utilizing distributed resources, enterprises seek to improve their services' scalability, availability, and responsiveness. Scalability here denotes the ability to extend the resources available to a service beyond a single data center's boundaries in order to address growing user demand. High availability means ensuring that users have continuous access to a service despite faults in systems, networks, hardware, or software — even when caused by natural disasters or human error. Responsiveness refers to short and predictable turnaround times on service requests; it is usually improved using affinity. A site in Singapore, for instance, might field all service requests originating in Southeast Asia.
Architects of global-scale services often achieve all three objectives using load balancing to distribute the task of servicing Internet requests among servers at multiple sites. Load-balancing algorithms dynamically adjust the number of sites, and the number of servers used at each site, in response to changes in demand. They are programmed to work around unreachable sites, and they account for affinities between Internet requests and the sites available to service them.
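To make that selection logic concrete, here is a minimal Java sketch combining the three ingredients just described: health checks, request affinity, and load. The Site fields, the region strings, and the two-step preference rule are illustrative assumptions, not the algorithm of any particular product.

    import java.util.ArrayList;
    import java.util.List;

    // Minimal sketch of affinity-aware global server load balancing.
    // The Site fields and the preference rule are illustrative assumptions.
    public class GlobalLoadBalancer {

        static class Site {
            final String name;
            final String region;               // e.g., "southeast-asia"
            volatile boolean reachable = true; // updated by periodic health checks
            volatile double load = 0.0;        // 0.0 (idle) .. 1.0 (saturated)

            Site(String name, String region) {
                this.name = name;
                this.region = region;
            }
        }

        private final List<Site> sites = new ArrayList<>();

        void addSite(Site s) { sites.add(s); }

        // Pick a site for a request: prefer a reachable site in the client's
        // region (affinity); among equally affine sites, prefer the least
        // loaded. Unreachable or saturated sites are skipped entirely.
        Site selectSite(String clientRegion) {
            Site best = null;
            for (Site s : sites) {
                if (!s.reachable || s.load >= 1.0) continue;
                if (best == null) { best = s; continue; }
                boolean affine = s.region.equals(clientRegion);
                boolean bestAffine = best.region.equals(clientRegion);
                if (affine != bestAffine) {
                    if (affine) best = s;      // affinity takes precedence
                } else if (s.load < best.load) {
                    best = s;                  // then lighter load wins
                }
            }
            return best; // null means no site can currently serve the request
        }
    }

In production deployments, logic of this kind typically runs inside authoritative DNS servers or dedicated global-load-balancing appliances rather than in application code.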
With large enterprises continuing to develop and deploy globally distributed content, it is important to understand the ecosystem for global-scale applications. At the base, a well-understood and widely adopted tiered architecture determines how to arrange servers and routers within a data center. At the top of the ecosystem are the applications, which today primarily include Web content and first-generation Internet services that follow the client-server paradigm, such as mail and chat. The articles in this issue discuss the two broad approaches generally used to deploy services globally across multiple data centers. In the future, most applications will likely be Web services, which involve wide-area server-to-server peering. The Internet community is actively working to define a comprehensive architecture for developing and deploying Web services. Apart from a greater focus on security and the additional server software needed to support emerging standards, however, the logical architecture of data centers should meet the demands of the new applications without major changes. Important lessons for the future can thus be found in today's solutions.
Application-Driven Deployment
As enterprises begin mastering the art of global data center deployment, a new class of applications is emerging to take advantage of the interconnected infrastructure. From NASA's Information Power Grid (www.ipg.nasa.gov) to the storage utility projects at various service providers, numerous efforts are under way to pool global computing and storage resources and deliver them to users as single, unlimited, highly available virtual resources on the Internet.
Developers are also using now-abundant development kits to construct location-aware Web services that deliver context-relevant information to mobile users anywhere in the world. Large enterprises, meanwhile, are planning massive business-process automation efforts to link their internal applications with those of their partners, suppliers, and customers. Successful deployment of such services will require servers at multiple sites to meet scale, availability, and responsiveness needs.
Challenges of Global Deployment
Emerging applications bring new challenges in security, interoperability, and portability. The identity-based management used by first-generation Internet services will prove inadequate for controlling programmatic access to server objects in many new services. Instead, the infrastructure will need to associate privileges with roles. Interoperability issues arise when Web services are built from independently developed components and deployed at locations around the world. These issues will require standard protocols for advertisement, discovery, and communication between components. Portability is another key challenge because a service developed and tested at one location might be deployed at servers with different software and hardware configurations. This calls for a standard Web service platform supported by servers at all sites participating in a service deployment. Web services security architecture and standards are the least developed, but the community has made substantial progress toward defining standard protocols for interoperability and standard platforms for portability.
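To illustrate the role-based approach, the following Java sketch attaches permissions to roles and identities to roles, so access checks never consult identities directly. The role and permission names are invented for this example.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Minimal sketch of role-based authorization: privileges attach to
    // roles, and identities merely map to roles. Role and permission
    // names here are invented for illustration.
    public class RoleBasedAccess {

        private final Map<String, Set<String>> rolePermissions = new HashMap<>();
        private final Map<String, Set<String>> userRoles = new HashMap<>();

        void grant(String role, String permission) {
            rolePermissions.computeIfAbsent(role, r -> new HashSet<>()).add(permission);
        }

        void assign(String user, String role) {
            userRoles.computeIfAbsent(user, u -> new HashSet<>()).add(role);
        }

        // A caller may invoke an operation if any of its roles carries the
        // required permission; identities never hold permissions directly.
        boolean isAuthorized(String user, String permission) {
            for (String role : userRoles.getOrDefault(user, Set.of())) {
                if (rolePermissions.getOrDefault(role, Set.of()).contains(permission)) {
                    return true;
                }
            }
            return false;
        }

        public static void main(String[] args) {
            RoleBasedAccess acl = new RoleBasedAccess();
            acl.grant("order-processor", "invoke:submitOrder");
            acl.assign("partner-app-42", "order-processor");
            System.out.println(acl.isAuthorized("partner-app-42", "invoke:submitOrder")); // true
        }
    }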
Emerging Standards
The emerging platform standards — Java 2 Platform, Enterprise Edition (J2EE) (java.sun.com/j2ee/), ECMA (www.ecma.ch), .Net (www.microsoft.com/net/), and Globus (www.globus.org/ogsa/) — raise the level of platform definition: Unlike traditional operating system APIs, these platforms are defined in terms of highly portable languages and runtime environments, supported by higher-level APIs for accessing native system resources and communicating with other (globally distributed) components.
New standards for a communication protocol and a data format — SOAP and XML, respectively — enable developers to create component-based, wide-area distributed applications while allowing components to exchange highly structured, semantically enriched data. The Web Services Description Language, Service Location Protocol, and Universal Description, Discovery, and Integration will help standardize service advertisement and location.
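The sketch below suggests what such an exchange looks like on the wire: a SOAP 1.1 envelope posted over HTTP from Java. The endpoint URL, XML namespace, and getQuote operation are hypothetical, and a real client would typically generate this plumbing from the service's WSDL description.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    // Minimal sketch of a SOAP 1.1 request sent over HTTP POST. The
    // endpoint, namespace, and operation are hypothetical.
    public class SoapClientSketch {
        public static void main(String[] args) throws Exception {
            String envelope =
                "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" +
                "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                "  <soap:Body>" +
                "    <getQuote xmlns=\"urn:example:stock\">" + // hypothetical operation
                "      <symbol>HPQ</symbol>" +
                "    </getQuote>" +
                "  </soap:Body>" +
                "</soap:Envelope>";

            URL endpoint = new URL("http://services.example.com/stock"); // hypothetical
            HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            conn.setRequestProperty("SOAPAction", "\"urn:example:stock/getQuote\"");

            try (OutputStream out = conn.getOutputStream()) {
                out.write(envelope.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }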
The J2EE and .Net standards specify the Java Virtual Machine (JVM) and Common Language Runtime (CLR) environments, respectively. Both have features to address Web service portability issues arising from differences in server software configurations at different sites. The Enterprise JavaBeans portion of the J2EE standard, for example, defines deployment descriptors that let developers specify the requirements a service component places on the operational environment. Similarly, .Net defines assemblies, which are used for deployment and versioning. Using the CLR, developers can also install different versions of an assembly side by side on the same server. Such features reduce the likelihood of deployment failures due to mismatches between a service's development and deployment environments.
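As a small illustration of the idea, a J2EE component can read site-specific configuration through its JNDI environment rather than hard-coding it; the deployment descriptor binds the values at each site. In the Java sketch below, the entry name maxCacheEntries is invented, and the lookup resolves only when the code runs inside a J2EE container that has processed the descriptor.

    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    // Sketch of a J2EE component reading configuration bound by its
    // deployment descriptor instead of hard-coding it. The entry name
    // "maxCacheEntries" is hypothetical; the deployer would declare it as
    // an <env-entry> in the descriptor, so the same code can run
    // unchanged at sites with different configurations.
    public class EnvEntryLookup {
        int readCacheLimit() throws NamingException {
            InitialContext ctx = new InitialContext();
            // "java:comp/env" is the standard per-component namespace.
            Integer limit = (Integer) ctx.lookup("java:comp/env/maxCacheEntries");
            return limit.intValue();
        }
    }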
Standards are in place not only to allow organizations and individuals to independently develop components for powerful new Web services, but also to ensure easy deployment at data centers — as long as servers comply with platform standards and support the standard communication protocol stacks.
The Articles
Given the economic climate in our industry, it is not surprising that there are currently many more ideas than successful implementations. In selecting articles for this issue, we avoided obvious and overly speculative solutions, focusing instead on forward-looking analyses that build on what exists and works today. Three of the articles in this issue describe system and network architectures for Internet services. The fourth explores security issues.
In "Architecture and Dependability of Large-Scale Internet Services," Oppenheimer and Patterson describe the architectures of three commercial Internet services. Their analysis of service availability reveals that user-visible outages are rarely due to component failures in systems inside data centers; the root cause is usually a network problem or operator error. They also note that today's Internet services, developed before standards for wide-area deployment were mature, call for tight coordination between developers and operators — an issue that adversely impacts availability.
Dilley et al. describe Akamai's platform for global delivery of static and dynamic content in "Globally Distributed Content Delivery." They detail the architecture of a hierarchical, DNS-based server-selection facility that exploits the late binding between domain names and IP addresses to achieve both the high-availability and responsiveness objectives of global deployment.
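The late binding itself is easy to observe. In the Java sketch below, repeatedly resolving the same name can return different addresses as authoritative servers steer clients toward the currently best servers. The host name is a placeholder, and the JVM's own DNS cache must be kept short (for example, via the networkaddress.cache.ttl security property) for repeated lookups to see fresh answers.

    import java.net.InetAddress;

    // Sketch of the late binding that DNS-based server selection relies
    // on: each resolution of the same (placeholder) service name may
    // yield a different answer when the authoritative DNS servers use
    // short TTLs to steer clients.
    public class LateBindingDemo {
        public static void main(String[] args) throws Exception {
            for (int i = 0; i < 3; i++) {
                InetAddress[] answers = InetAddress.getAllByName("www.example.com");
                System.out.print("resolution " + (i + 1) + ":");
                for (InetAddress a : answers) System.out.print(" " + a.getHostAddress());
                System.out.println();
                Thread.sleep(30_000); // wait past the record's TTL before retrying
            }
        }
    }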
With "Building a Multisite Web Architecture," Steel takes us on a journey from design to deployment with a large data center. He describes practical considerations governing the selection of servers, networking equipment, and server software. Unlike Dilley, Steel describes a proprietary solution to deploying replicated content; it is more tightly integrated with the content-production environment. He also describes the basics of testing, securing, scaling, and administering the hardware and software infrastructure for a large-scale Internet service.
Finally, Fürst et al. highlight the role of security in Web-services-based multi-enterprise collaboration in "Managing Access in Extended Enterprise Networks." The authors advocate a role-based approach to authorization and authentication and describe two different security architectures, each suited to a different level of service complexity. Their approach exploits XML for interoperability and XML ontologies (either translated or shared) for role definition. They also outline an open-source implementation.
In selecting the theme articles, we avoided picking between competing platform standards. Instead, we opted to broadly explore approaches to working with large-scale services and systems. The sidebar, "Physical Architecture of Data Centers," gives pointers to related technologies that are not covered in the following pages.
Pankaj Mehra is a principal member of technical staff at Hewlett-Packard NonStop Labs. His research focuses on the design of systems and networks for large-scale enterprise applications.