
Letters to the Editor

Tom DeMarco's essay inspired several readers to send in letters to the editor. To foster discussion in the software engineering community, we offer them here. Send letters to the editor to dstrok@computer.org.

 

Editor,

Like Tom DeMarco, I once had high hopes for software engineering. My disillusionment also started with the suspicion that software engineering was not addressing the issues that matter most; instead, its practice seems to have fallen in thrall to an attitude I call "proceduralism." More than just an emphasis on procedures and standards, it's the belief that all our problems can be solved if we just adopt sufficiently detailed procedures, and in its extreme form, it's a refusal to recognize that there are any problems that cannot be resolved this way. Control may not have become an end in itself, but there seems to be an assumption that it can be made into a sufficient condition for success.

Metrics are as good an area as any to see how this happens. From the belief that measurements are important, it's a slippery slope to "if you can't measure it, it doesn't matter." (I have heard this said.) I once read a paper, from a respected development organization, whose authors claimed that the problem of software quality had been solved. On further reading, I found that they had defined quality in terms of various metrics, none of which directly addressed the customers' quality concerns, and none of which were sensitive to certain important issues, such as whether the semantics of the software matched the domain. (Coincidentally, the June issue of Computer published Tetsuo Tamai's analysis of a costly error in the Tokyo Stock Exchange's trading system, where invalid semantic assumptions and dependencies were an important contributory factor. This was just a particularly harmful case of a pervasive problem that cannot be eliminated through rote or automated methods: I do not know whether the code in question would have triggered any quality warnings, but I think I can safely say that if it had, it would have been coincidental rather than for the reasons that made that code so harmful.)

It does not help to claim that this definition of quality is what the term means in software engineering; redefining a problem to exclude the difficult cases is the opposite of solving them, and is an instance of the more extreme form of proceduralism.

Fred Brooks, in his well-known "No Silver Bullet" article, identified much the same problem: his accidental problems are generally amenable to a procedural resolution, while the essential ones are not. What worked in the past is running out of low-hanging fruit to gather (although I'm always bemused by the number of development organizations that still lack effective automated regression testing, to pick just one anachronism). In discussing possible solutions to the hard problems, Brooks advocates more emphasis on the skills of the individuals actually creating the software, noting that some are much more effective than others. Software engineering seeks the opposite: to suppress the influence of this variability. That suppression leads to a form of Frederick W. Taylor's Scientific Management that is inappropriate to an activity like software development.

Having said this, I'm not entirely in agreement with Mr. DeMarco. I think the goals of being able to reliably produce high-quality software and to predict what it will take to do so are worthwhile, even if they will not, by themselves, lead to society-transforming developments. Bad software is the enemy of innovation, flexibility, and alacrity, which makes the issues I've raised a concern for people working on bold new ideas, not just those seeking modest returns from incremental change. We must surely also recognize that all sorts of software exist: we should be cautious about applying methods devised to develop Web businesses to the creation of fly-by-wire aircraft control systems. And in the latter case, I'm willing to accept that controlled procedures are a necessary, though never sufficient, condition for success. I think there may be a way forward for software engineering, but it will not be as yesterday's discipline.

Andrew Raybould
Thales Fund Management
andrew dot raybould at gmail dot com


Tom DeMarco responds:

I particularly like your phrase "redefining a problem to exclude the difficult cases is the opposite of solving them." I'm also taken by your mapping of what is and what should be the guts of an engineering discipline onto Brooks's accidents and essence. On the whole, your piece seems more moderate and perhaps more realistic than mine. But I had a slightly different goal that required a bit of shock and awe.


Editor,

I read your article with interest and was struck by an eerie thought: what if software projects were incentivized like Silicon Valley startups? Motivation is the key ingredient that software engineering as currently constituted does not address. In your article, you distinguish between projects that deliver $1.1 million and $50 million in value, but you don't state who gets the value or how that feeds back to motivate the team leadership to make appropriate decisions. There are several groups of stakeholders for any new technology: those who invent it, those who build it, those who pay for it, those who sell it, and those who will use it.

In Silicon Valley, that dream of the giant IPO is a huge motivator in dealing with the realities of venture capital funding. Commercial, defense, and other major corporate projects don't have that kind of alignment, and that may account for much of the performance disconnect we've been seeing for the last few decades. In defense contracting in particular, all the stakeholder groups are distinct and have limited impact on decision making by any of the other stakeholders, resulting in disintegration of alignment rather than integration of purpose and performance.

Have you seen any good studies on the impact of good motivational alignment on software engineering outcomes? If you have, please send me some citations.

Bob McCann
staff systems engineer
Lockheed Martin Aeronautics
Bob.McCann at lmco.com

P.S. It's possible to measure the ROI of effective metrics. Without metrics, it's impossible to counter mistaken beliefs or to distinguish among good, excellent, and moderately poor performance. See my Crosstalk articles "How Much Code Inspection Is Enough?", "When Is It Cost Effective to Use Formal Software Inspections?", and "The Relative Cost of Interchanging, Adding, or Dropping Quality Practices". The answer? Nearly always: even multiple inspections will pay dividends, especially in security evaluation.

Tom DeMarco responds:

What a thoughtful response. Your observation makes a mockery of a traditional cost/benefit study. I've never thought about it before, but your logic suggests that there need to be as many cost/benefit studies as there are payers and beneficiaries. And all of them have to make sense in order for the project to be reasonable. Thank you for the insight.


Editor,

As always, Tom makes some good points. But if risk analysis and value analysis are added to the discussion, the conclusions are the opposite of Tom's.

Applications in the US$1 million cost category have about a 50 percent chance of failure or not being delivered at all. This means that Tom's Project A with low value is too risky to even attempt.
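To make the arithmetic explicit (a back-of-the-envelope illustration using the $1.1 million and $50 million value figures quoted in the letters above, and treating the 50 percent failure rate as a simple expected-value discount):

  • Project A: 0.5 × $1.1M = $0.55M of expected value against roughly $1M of cost, an expected loss even before any overrun.
  • Project B: 0.5 × $50M = $25M of expected value against the same rough cost, comfortably positive even at that failure rate.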

For Project B, with its high value, quality controls and change controls are mandatory at a minimum to ensure that it actually gets delivered and generates value. In fact, the more valuable the software, the more quality and change control it needs.

If the software is delivered with high numbers of latent bugs, it won't yield the projected value—assuming it gets delivered at all.

I often work as an expert witness in cases involving software failure. Poor quality control, poor change control, inadequate estimates, and poor status tracking are the main reasons that software projects fail.

Capers Jones
Founder, Software Productivity Research
capers.jones3 at gmail.com


Editor,

I think Tom DeMarco is saying something very important, but saying it so softly that many might miss it.

Let me rephrase it:

  • We need to use the best means we can find, even software engineering—revisited—to shift our focus away from narrow control over project time and cost alone.
  • We need to shift to value for money.
  • We need to shift to a value focus.
  • We need to learn to engineer value (for money) into our systems.
  • And we have clearly failed to do so thus far.

The key is the general ability to quantify both business and project/product values. This is not at all difficult if we really try. Not all values translate directly into money, but we can still engineer and manage them. (See Gilb's law in Tom's book Peopleware.)
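For example, here is a hypothetical, deliberately simplified quantification of a value that does not translate directly into money:

  • Intuitiveness. Scale: minutes for a first-time user to complete a defined task unaided. Meter: observed sessions with ten randomly selected users. Past: 15 minutes. Goal (next release): 5 minutes.

Once a value is put on a scale like this, it can be estimated, designed toward, and tracked, which is what engineering it requires.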
If readers would like more depth on this from my point of view, see "Quantifying Stakeholder Value" and "Value Delivery".

Tom Gilb
Senior Partner, Result Planning
tom at gilb.com


Editor,

It was interesting to read Tom DeMarco's short piece in IEEE Software. His willingness to look back, reflect, and adjust past views from the perspective of accumulated experience increases his stature in my eyes.

Through most of my career, I have been caught between pressures for more and more metrics and control systems and the reality of software development, where excessive control stifles creativity and productivity. Teams often learn to prize metrics compliance over good software design, because rewards revolve around the metrics. My response to such pressures has always been "how is it implemented?" and "how does it help the team to do better?" And wherever there has been a reasonable response to these questions, we were able to implement something pragmatic and positive overall.

DeMarco's example of two projects resonates as well. My own approach has been to create enabling project structures and dependency agreements in the beginning of any project. After that it has always come down to risk management—anticipating and resolving risks.

On a slightly different note, while software development costs for an organization overall are not small, the costs to the individual developer, in terms of effort per unit of work, have come down considerably. You are more likely to get better engineering and more elegant design and code if you have to work within 64 Kbytes of memory and compiling and debugging take precious time than if you have gigabytes of memory, very fast processors, and instant compile-and-test cycles. Ironically, the developments in hardware, tools, and so on have led to a philosophy of "write it and let's see if it works" as opposed to "let's make sure it will work when we compile."

What is probably needed now are boundary controls, that is, controls that clearly spell out what software development should strive for: reusable modules, a single version of the truth, consistent UIs, consistent math, no memory leaks, and so on. If we agree that software engineers are self-driven, imaginative people, we would rather give them goals than tell them what to do and how to do it. This may help keep the processes light without sacrificing any of the goals of the discipline. Indeed, as my daughter enters her teens, this is precisely what I might do.

Kannan Vijayaraghavan
Athreya Associates
Singapore
kvijayaraghavan at gmail dot com


Editor,

After publishing Controlling Software Projects: Management, Measurement, and Estimation (Prentice Hall/Yourdon Press, 1982) 27 years ago, Tom has come to the conclusion that software metrics are not—yes, not—a must for any successful software development effort. He is also sure that the advice given in that book was not correct then and that those metrics are irrelevant today. The purpose of metrics is control, based on the assumption that "you cannot control what you cannot measure." He gives an elegant example of where control is required. The essence is this: where the ratio of value to cost approaches unity, one needs control; where the ratio is very high, control is unimportant. He then raises an important question: "Why on earth are we doing so many projects that deliver such marginal value?"

He suggests agile methods, or an incremental approach to software development, to deliver the high-value core early and then deliver functionality of decreasing value in subsequent increments. He attributes the software failures of the last 40 years to the importance given to consistency and predictability, and advises that more importance be given to creating software that changes the world and the way business is done. He concludes that "software conception" needs more focus than "software construction."

Tom's confession reminds me of another episode. In Congressional testimony on 23 October 2008, the American economist Alan Greenspan admitted fault in opposing the regulation of derivatives and acknowledged that financial institutions didn't protect shareholders and investments as well as he expected. His 40-year belief in certain free-market concepts had failed. Why was his theory supported by banks, the public, and governments? Because it helped everyone get what they did not deserve, and in the short term everyone saw only benefits—except those taxpayers who neither participated nor defaulted. No one discussed the ethical aspects, the long-term consequences, or the dissenting opinions.

Tom then asks, "Why on earth are we doing so many projects that deliver such marginal value?" It's an interesting question. There are several reasons:

  • The business and software industries have been under continuous pressure for decades to show unsustainable "growth" every quarter. This forces them to identify new markets, identify problems, propose solutions, deliver those solutions, and earn money for marginal value.
  • Developers were working on a "rate" basis that discouraged the value proposition. Except for the taxpayers, everyone involved in the project benefited.
  • "Cost plus" and "rate" contracts benefit from elaborate software metrics, construction, testing, quality assurance measures, and maintenance rather than from software conception.
  • The incremental concept reduces the time and budget needed for product development. It eliminates the need for many experts and costly tools, because things are simplified.
  • Those who conceive software projects are too far away from reality.
  • Most stakeholders are working for money only.

I would like to offer an example from my own experience. It was in the 1980s. My former company, Mecon Limited, India, wanted to develop a "Blast Furnace Tuyere Leakage Detection System." I took up the challenge and studied the system requirements as well as the state of the art in the field. I found only one company, in Germany, providing a solution, and it was a purely electronic hardware solution. The system's sensors (magnetic flow meters) needed to be highly accurate (0.2%), and they contributed a major part of the system's total cost; normally available sensors offered 0.5% accuracy. My study revealed that the sensor's resolution (0.1%) was better than its accuracy. I therefore decided on a solution based on a PC and low-accuracy sensors, using software to calibrate the instruments so that the measurement accuracy could approach 0.1%. In those days, lifecycle paradigms were not widely known. I decided to prove the core concept in a first stage, so that management could be convinced that a solution was feasible but would need further improvement, and then to improve the accuracy of measurement via software in subsequent increments. My proposed concept appealed to management because it assured low investment and little lead time, and they approved it. In the first phase, my team members and I developed a solution that could detect leakage using flow meters and a PC. Later, we developed a suitable algorithm to collect online data and create a calibration table. The system was successfully demonstrated at the Rourkela Steel Plant.
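The calibration-table idea is simple enough to sketch in a few lines of modern Python (the numbers here are hypothetical, and the actual 1980s implementation of course used none of these tools): readings from a repeatable but inaccurate sensor are paired with values from a trusted reference, and later readings are corrected by interpolating that table.

    import numpy as np

    # Calibration pass: raw readings from the low-accuracy sensor paired with
    # simultaneous values from a trusted reference instrument.
    raw_readings = np.array([10.06, 20.11, 30.21, 40.16, 50.27])  # sensor output
    reference    = np.array([10.00, 20.00, 30.00, 40.00, 50.00])  # reference meter

    def corrected(raw):
        """Correct a raw reading by interpolating the calibration table."""
        return np.interp(raw, raw_readings, reference)

    print(corrected(25.16))  # a corrected flow value between calibration points

Because the sensor's errors are repeatable, the correction recovers accuracy close to its resolution, which was the whole economic point of the design.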

I later applied this incremental approach in other successful R&D projects. The approach works only when people are genuinely interested in an early solution on a small budget. When management approached me for a solution to an industry problem, I promised to demonstrate a feasible concept within a very short period—for example, in less than four weeks. Regular research scholars opposed this approach precisely because I was demonstrating the feasibility of a solution in four weeks, without proposing elaborate training, conference attendance, or long study. None of my team members were PhDs or recognized research scholars. Yet we delivered solutions.

Our profession has to learn the following lessons from Tom:

  • Allow dissenting opinions in publications (journals, conferences, seminars) even if they do not come from eminent personalities. Popular opinions should be readily available in several different media, but dissenting opinions should also be kept in the repository so that they're available in searches and for future reference.
  • Let time decide who is right.
  • Assess an idea's value to society at large rather than to individuals, teams, organizations, or governments.
  • If we cannot monetize an intangible benefit, let us not discard it as useless. Time, our experience, and maturity may not be sufficient to make a proper assessment today.

We, and every member of the IEEE, must appreciate Tom DeMarco's courage and honesty in expressing his reflections. I hope more and more engineers come forward and honestly review their life's contributions—this is very important to our students and young professionals.

R.T. Sakthidaran
Academic dean and head of IT,
KLN College of Engineering, Madurai, India
sakthidaran at gmail dot com


Editor,

In his recent essay, DeMarco presents his view that the purpose of software engineering should be to create software that changes the world or transforms a company or how it does business—that is, produces high impact. He argues that the metrics-centric focus of software engineering, and by extension the focus on controlling software projects, is unnecessary for high-impact projects. While agreeing that an excessive focus on metrics to the detriment of a project's success is folly, we believe it's important to frame the argument to the circumstances.

DeMarco takes control to mean "consistency and predictability." We counter that control is still important for software projects, in particular for the high-impact projects that DeMarco suggests we focus on.

DeMarco asserts that control is not an important aspect of software projects, using Wikipedia and Google Earth as examples. For projects whose failure may bring severe consequences, we contend that control is essential. Without control, the likelihood and impact of risks are unknown, and an effective risk-mitigation strategy is therefore nigh impossible.

DeMarco claims that control is only really important on projects with low return on investment (ROI). The reasoning, it seems, is that if a project with low ROI runs over budget, the value it delivers is likely to end up lower than its final development cost, so the project becomes a loss. But this takes no account of the risks involved in such projects, nor of the fact that projects with high ROI often involve very significant risks. It is not immediately apparent that control is "almost not at all important" for projects with high ROI.
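To spell out the overrun logic with hypothetical figures: a project budgeted at $1 million that promises $1.1 million in value has only $100,000 of headroom, so a 20 percent cost overrun (a final cost of $1.2 million) already turns it into a loss, whereas a project promising $50 million absorbs the same overrun easily. Note that this arithmetic says nothing about the probability of either project failing outright, which is precisely the risk that the low-ROI-only view of control leaves out.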

ROI is not limited to money. ROI can include scientific advancement (the Mars Lander, the International Space Station, and the Large Hadron Collider), better defense (the Joint Strike Fighter), knowledge accessibility (Google Scholar), and more. As an example, the development of the Ubuntu Linux distribution was a high-risk project with (relatively) low investment, high control, and high return. It's not clear to us that there are so many projects delivering marginal value, as DeMarco suggests. Perhaps a better question might be "Are software engineers developing projects that have a reasonable ROI for the risk involved?"

We disagree with the notion that projects where control isn't necessary are preferable, but we agree that expectations about the ability to control need to be lowered. There is a point at which increased measurement and control yield diminishing returns.

DeMarco's reference to raising a teenager and its apparent connection with managing a project seem contradictory, but the analogy would be more compelling if he included phrases like "quantitative/qualitative measures" and "degrees of control."

We find DeMarco's definition of software engineering as a particular set of activities somewhat absurd. Every engineering discipline has various sets of activities, usually built up over time, that help reduce risk, cost, and/or schedule and increase quality; the disciplines differ in their goals, not in having such activities.

The goal of software engineering should be to create software with maximum impact. However, failing to balance the cost of investment with the return may well move a project from being "useful to people" (positive impact) to a drain on resources (negative impact).

Luke Nguyen (luke dot nguyen-hoan at cs dot anu dot edu dot au) and Alvin Teh (alvin dot teh at cs dot anu dot edu dot au)


Editor,

In his Viewpoint essay in IEEE Software's July/August issue, Tom DeMarco makes several valid points concerning software project measurement and control. Unfortunately, his conclusions are of limited applicability and overall probably incorrect for most software engineering work.

More specifically, more metrics are certainly not better. But he seems to miss the point that qualitative measures can often be used successfully and that the precision of a measure might not be the prime selection criterion. Managers work with qualitative measures all the time, often informally obtained, but nevertheless measured. The same applies to many of the personal and psychological measures in the "unsettling analogy" described in the article.

It's also evident that software engineering is "somewhat experimental," as we noted in our IEEE Software paper "Misleading Metrics and Unsound Analyses" in the March/April 2007 issue. The domain envisaged in DeMarco's article appears to be in-house new-technology development; if we consider defense systems contracts, or indeed almost any contract or third-party development, it will be difficult in the extreme to sell a message of unknown cost, unknown schedule, and unknown delivery. There are alternative measurement and control mechanisms in the literature, such as incremental commitment, that offer more realistic control and measurement activities for systems development projects of this type.

Ross Jeffery
Professor of Software Engineering, CSE at UNSW
Empirical Software Engineering and Project Leader, NICTA
Ross.Jeffery at nicta dot com dot au


Editor,

I found Tom's article interesting and somewhat strange. Tom doesn't define what engineering software means to him—only "what software engineering has come to mean."

I have always understood the term to mean applying engineering methods to the development of software—which to me means choosing the methods that would be most effective in solving the problem. So I have always (naively perhaps) equated software engineering with software development. Some do it effectively and are able to deliver quality software within their budget and time constraints. Others are less proficient, just as with other engineering disciplines.

With my definition, the article title would read "Software Development: An Idea Whose Time Has Come and Gone?" That is, well, laughable today. Maybe sometime in the future artificial intelligence will self-generate applications, but we're not there yet.

The question that bothers me is this: Why has software engineering come to mean "a specific set of disciplines including defined process, inspections and walkthroughs, requirements engineering, traceability matrices, metrics, precise quality control, rigorous planning and tracking, and coding and documentation standards"?

Winifred Menezes
Expert Consultant at DNV ITGS
Winifred.Menezes at dnv dot com


Editor,

 

While I agree with Tom DeMarco's conclusion that software should transform the way we live, you still must engineer the software so that it works. And for that, you need to take a few measurements. For example, how do you know when you're finished testing the software? Perhaps you track defects found during peer reviews or test events. How can you tell if your development processes are working? As he states, you can't control what you can't measure, and gut feel isn't a good indicator of whether the spiffy new software development tool you just purchased is making your software better. Even more important, how do you know when to pull the plug on a project and go back to the drawing board? I would strongly recommend having some objective performance measures before you cancel a program.
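As one minimal sketch of the kind of measure this implies (the data and exit threshold here are hypothetical, not a standard), a team might track the weekly defect-discovery rate and treat a sustained drop below an agreed threshold as the signal that testing can stop:

    # Hypothetical defect counts found in each week of testing.
    weekly_defects = [34, 28, 19, 11, 6, 3, 2]
    THRESHOLD = 5  # agreed exit criterion: fewer than 5 new defects per week

    def testing_done(history, threshold=THRESHOLD, run=2):
        """True if the last `run` weeks all fell below the threshold."""
        return len(history) >= run and all(d < threshold for d in history[-run:])

    print(testing_done(weekly_defects))  # True: the last two weeks were below 5

The specific rule matters less than having one: without some objective measure, "are we done testing?" is answered by gut feel.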

As DeMarco says, software engineering is more than measurements. Check the Software Engineering Body of Knowledge (SWEBOK) for the other software engineering knowledge areas, such as requirements, design, testing, and configuration management, that are just as relevant today as they were 40 years ago. One need only look at the Standish Group's Chaos Report or the Risks column in Software Engineering Notes to see the impact of a lack of software engineering discipline. Most of the reasons behind the Chaos and Risks failures trace directly back to lessons learned while engineering software and documented in SWEBOK references. I am often involved in "Red Team" work, assisting in investigating issues surrounding underperforming software development programs. Time and time again, we see the same violations of good software engineering practice as the root cause of many of these egregious overruns. We often joke that we could just use the SWEBOK Guide paragraph numbers as a final Red Team report.

Perhaps DeMarco went a bit overboard in his 1982 book, but not by much. Smart software measurement, and the management skill to know what to do with those observations, is still essential to an effective development program. Sure, you can prototype and experiment, but when you have to build a reliable, working system without breaking the bank, you have to measure.

I think there are others who agree with me. I find it humorous that the Computing Now web page that reprinted DeMarco's IEEE Software essay (http://www2.computer.org/portal/web/computingnow/software) offers links, under related products, to Introduction to Applying Software Metrics and A Practical Metrics and Measurements Guide for Today's Software Project Manager, both by the current IEEE Computer Society president, Kathy Land.

Mark Doernhoefer
IEEE CS, ACM
mdoernho at acm dot org