Cisco Systems
Iowa State University
University of South Florida
pp. 10-11
Simply mentioning the term data center design conjures up different images for different audiences. To corporate security personnel, it might mean the physical access controls that secure the data center building itself; to facilities operators, the building's power and cooling systems. Network equipment vendors see data center design as a matter of racks of communications equipment and cabling/patch-panel strategies, whereas software and server vendors focus on the value their respective technologies bring to the table. Even the decision about a data center's geographic location involves diverse factors, such as weather and infrastructure. Once designers address all the physical considerations, their initial choice of software or hardware vendor can influence additional capabilities, such as virtualization, ease of scalability, and disaster recovery. Some subcategories, such as green computing, touch multiple facets of the data center, from its physical layout to the operation of its servers and network switches.
However, the fundamental objective of all these design facets boils down to a simple business proposition: maximizing efficiency while minimizing costs. That's almost common sense, but given the breadth of data center design, each aspect has its own benchmarks and performance measures. On one hand, the facilities aspect of a data center resembles physical infrastructure such as plumbing and electrical wiring, something installed once during a building's construction. On the other, because the data center directly affects an enterprise's operational efficiency and must accommodate its growth, its design must be dynamic and scalable. The ultimate challenge, then, is remaining budget conscious during the initial rollout while preserving the ability to expand and change down the road.
This issue provides a diverse sampling of these disparate subcategories. The first article, "Return on Infrastructure: The New ROI," covers techniques for determining the cost/benefit of capital equipment. Given the dynamic nature of data center design, it's important to quantify how long a particular system setup will remain relevant.
Beyond general capital equipment analysis and operation, software is what provides a data center's actual application support. The second article, "A Multiple-Criteria Approach to Ranking Computer Operating Systems," presents a case study of a methodology for selecting a software vendor and the impact that choice can have on maintainability. These concepts apply to desktops as well as to data center servers, with the density of servers in a typical data center compounding the impact.
The third article, "Optimizing the WAN between the Branch Office and the Data Center," discusses techniques for managing branch and remote sites and their connections to the data center.
The final article, "Greener PCs for the Enterprise," describes methods for reducing the energy consumption of enterprise PCs, closing the theme with a full gamut of energy issues, from the data center to the branch office to the enterprise desktop. The article also highlights that data center design can involve looking beyond the data center's four walls to influence all operational aspects of the enterprise.
This set of articles merely scratches the surface of data center design and the enterprise infrastructure that data centers support, but it demonstrates the wide array of considerations involved. Our hope is to make this a recurring theme in the magazine, covering both breadth and depth in tracking current best practices for data centers.