IT Professional, vol. 9, no. 5, September/October 2007
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MITP.2007.88
Imagine an argument about which boxer—Muhammad Ali or Joe Louis—was better. Many elaborate simulation models have been contrived to settle this argument. You could argue superiority based on particular fighting characteristics; some have modeled these characteristics to create simulated fights. Still others have compared the two fighters' results against similar opponents. You could even argue about training techniques and the quality of their managers, trainers, and promoters. But there would be no conclusive evidence that one fighter was better than the other, or that they were evenly matched, because no direct metrics are available to make such a value judgment.
Now imagine an argument involving two IT projects—A and B—in your portfolio. Which project is better? Like the boxing debate, you could create simulation models (the Constructive Cost Model, for example), argue superiority based on a number of qualities (think balanced scorecard), solicit value judgments from panels of experts, or use any number of measuring tools. You could also examine the qualities of the process by which the software was produced, or point to certifications where applicable. No matter how you do it, though, it's difficult to make such a comparison.
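To make the comparison problem concrete, here is a minimal sketch of the basic Constructive Cost Model (COCOMO) effort formula, using the published coefficients for "organic" (small, in-house) projects; the project sizes below are hypothetical:

```python
def cocomo_effort_person_months(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Basic COCOMO: estimate effort in person-months from size in
    thousands of delivered lines of code (organic-mode coefficients)."""
    return a * kloc ** b

# Reducing two projects to a single size-derived number shows how
# crude such model-based comparisons can be.
effort_a = cocomo_effort_person_months(50)   # hypothetical project A: 50 KLOC
effort_b = cocomo_effort_person_months(120)  # hypothetical project B: 120 KLOC
```

A bigger effort estimate says nothing about which project better serves the organization's mission, which is precisely the point.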
Let's assume that project B is better than project A. How are you going to improve the quality of project A? This leads to further questions: what do you mean by quality, in characteristics or in process, and how are you going to measure it? It's rather disappointing that, as IT professionals, we're often no better at making these comparisons than two boxing pundits arguing in a bar.
Trying to answer these kinds of questions helps us understand why enhancing IT quality through process improvement is so difficult—metrics for the appropriate qualities are hard to find, after all. Measurement is an explicit step in every quality improvement framework. For example, the Six Sigma methodology embodies measurement in its define-measure-analyze-improve-control (DMAIC) sequence, and the Capability Maturity Model (CMM) requires that software processes be measurable.
But the quality question involves more than just measurement, as both Six Sigma and CMM explicitly describe. IT organizations of every kind seek enhanced quality in their delivered systems and services. But do those systems and services optimally support the organizational mission?
This special issue of IT Professional features two articles that highlight the critical role of planning in achieving higher levels of quality and improved processes in IT.
In "Planning and Executing IT Strategy," Rupert A. Hayles, Jr. provides a fresh look at planning in the business-IT interface. His notion of the alignment link keeps the business needs visible to IT. There is no higher indicator of quality than having IT systems squarely in the value chain of the organization's products and services. Following Hayles' advice on how to manage the business-IT relationships can help avoid the ultimate undesirable outcome for the IT unit: the customer service death spiral. Recognizing the early warning signs of the death spiral can help IT make the midcourse corrections it needs to move back onto a productive path.
Any discussion of quality moves quickly to talking about customers. The question becomes this: have your systems and services satisfied your customers' needs? Quick-and-dirty definitions of quality—such as, quality means never having to say you're sorry—echo this theme. Glossed over in these discussions is that customers come in many flavors. Precisely whom are you trying to satisfy? Is it the person funding the project? Is it the person using the resulting system? Or is it someone else entirely? Hayles breaks down the confusion about customers, identifying the various customer-like roles that we typically find in IT projects. Understanding these roles and how to satisfy the people who fulfill them can go a long way to improve quality.
The planning-for-higher-quality theme is further explored by Scott E. Donaldson and Stanley G. Siegel in their article, "Enriching Your Project Planning: Tying Risk Assessment to Resource Estimation." The authors' extensive practical experience as large-system architects and managers comes through in their recommendations for improving the planning process.
Donaldson and Siegel acknowledge the need to keep risk at the forefront of considerations throughout a project. However, they challenge the conventional practice of developing, and then maintaining (the difficult part), two distinct plans: a project implementation plan and a risk management plan. Experienced managers will likely relate to the challenge of keeping the two plans synchronized over the duration of a project. Instead, the authors call for risk considerations to be introduced directly into the resource estimation process used in project planning. In this way, the cost and schedule estimates already incorporate the project's risks. Their risk-derived resource allocation process provides an innovative and realistic approach to IT project planning.
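Their article details the method; purely as a generic illustration (not the authors' specific technique), one simple way to fold risk into an estimate is to add each risk's expected cost—probability times impact—to the base figure. The risks and numbers below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: float  # chance the risk materializes, 0..1
    impact_pm: float    # extra effort in person-months if it does

def risk_adjusted_effort(base_pm: float, risks: list[Risk]) -> float:
    """Add total risk exposure (sum of probability * impact) to the base
    estimate, so the plan's single effort figure already carries the risks."""
    exposure = sum(r.probability * r.impact_pm for r in risks)
    return base_pm + exposure

risks = [
    Risk("key vendor slips delivery", 0.3, 10.0),
    Risk("requirements churn", 0.5, 6.0),
]
print(risk_adjusted_effort(100.0, risks))  # 100 + 3 + 3 = 106.0
```

The point of the sketch is the unification: one estimate carries both the baseline work and the risk exposure, rather than two separately maintained plans.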
We hope this special issue of IT Professional offers you useful approaches to planning that can ultimately improve customer satisfaction, system and service quality, and the resource estimation process.
Phillip A. Laplante is a professor of software engineering at Pennsylvania State University's Great Valley School of Graduate Professional Studies. Contact him at firstname.lastname@example.org.
William W. Agresti is a professor of information technology in the Carey Business School of Johns Hopkins University. Contact him at email@example.com.
G. Reza Djavanshir is an associate professor of information technology in the Carey Business School of Johns Hopkins University. Contact him at firstname.lastname@example.org.