Issue No. 03 - May/June (2004 vol. 21)
Many of us have heard someone say, upon completing a course of study, "Now I know exactly how much I don't know." Even Thomas Jefferson is said to have observed that "he who knows best knows how little he knows." I've seen this phenomenon displayed by both individual programmers and entire software development organizations throughout my career in the software industry.
Invariably when I visit an organization and am told that they have the "software problem" under control, I can count on both an uphill battle over making any meaningful changes and a good dose of grousing over "naive customers" and "unreliable contractors." Conversely, I'm often pleasantly surprised when I encounter an organization that seems almost embarrassed to explain their software development process to me—they'll often say this is how they currently do things but hasten to add that they're trying to improve one aspect or another. More often than not, while they might not be perfect, they're better than the vast majority of professional software development organizations out there. Likewise, we've probably all run into hotshot programmers who speak in bits and bytes, promise the moon, but somehow never seem to deliver their work products (at least ones that actually work) on time.
I never really thought of this as some kind of universal pattern. Rather, I simply assumed that it was the luck of the draw, and I continued to take both organizational and personal self-assessments of capabilities at face value. After all, we're taught that self-confidence is important, and to show doubt in your capabilities is the equivalent of hanging a sign that says "INCOMPETENT" around your neck.
Unskilled and Unaware
My past assumptions now seem to be incorrect. A friend recently brought a paper published in a well-known psychology journal to my attention: "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments" (Journal of Personality and Social Psychology, Dec. 1999, pp. 1121-1134; www.apa.org/journals/psp/psp7761121.html). This article, by Justin Kruger and David Dunning, is fast becoming a classic in the field of psychology. It's also brought all my past experiences into crystal-clear focus and provided me with a deeper understanding than ever before of why some people and organizations just never seem to "get it."
Kruger and Dunning point out in their paper:
We argue that when people are incompetent in the strategies they adopt to achieve success and satisfaction, they suffer a dual burden: Not only do they reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the ability to realize it. Instead, … , they are left with the mistaken impression that they are doing just fine.
In other words, people (and organizations) that aren't too good at what they do often don't realize it by virtue of the fact that they aren't too good at what they do.
The paper presents a series of studies that analyzed participants' self-assessments of their appreciation of humor, their ability to apply logical reasoning, and their understanding of English grammar. In short, they found that incompetent participants rated their relative performance higher than was really the case, while their more skilled colleagues tended to underestimate their relative performance, if for no other reason than believing everyone else did as well as they did.
As we'd expect, the researchers also found that with training, the originally incompetent participants' reassessment of their performance was closer to their real performance—in other words, not only had their performance improved, so had their ability to predict their performance. Importantly, this improved ability to predict their performance didn't necessarily arise from actually performing better, but perhaps simply from increased knowledge.
Software Development and Psychology
This is all well and good, but what does a psychological study have to do with developing software? Kruger and Dunning point out that "incompetent individuals, compared with their more competent peers, will dramatically overestimate their ability and performance relative to objective criteria."
Interestingly enough, we find that as software development organizations become more mature in terms of their CMM levels—for instance, moving from Level 2 to Level 3—the frequency of budget and schedule overruns decreases. For example, Patricia Lawlis, Robert Flowe, and James Thordahl compared the deviation between planned and actual costs for 31 projects performed by a variety of companies at CMM Levels 1, 2, and 3 ("A Correlational Study of the CMM and Software Development Performance," Crosstalk, Sept. 1995, www.stsc.hill.af.mil/crosstalk/1995/09/Correlat.asp). Their results suggest that "as an organization matures from Level 1 to Level 5 … variability of the actual results about the target decreases, i.e., performance becomes more predictable."
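To make "variability of the actual results about the target" concrete, here is a small sketch with hypothetical numbers (the project costs below are invented for illustration, not taken from the Lawlis, Flowe, and Thordahl study). It measures each project's relative cost deviation, (actual − planned) / planned, and compares the spread of those deviations for two notional organizations:

```python
from statistics import pstdev

def relative_deviations(planned, actual):
    """Relative cost deviation for each project: (actual - planned) / planned."""
    return [(a - p) / p for p, a in zip(planned, actual)]

# Hypothetical planned and actual project costs (e.g., in $K) for two
# notional organizations at different maturity levels.
low_maturity_planned = [100, 200, 150, 120]
low_maturity_actual = [180, 160, 310, 90]
high_maturity_planned = [100, 200, 150, 120]
high_maturity_actual = [110, 195, 160, 125]

low = relative_deviations(low_maturity_planned, low_maturity_actual)
high = relative_deviations(high_maturity_planned, high_maturity_actual)

# A more mature organization shows a smaller spread of deviations about
# the target -- i.e., more predictable performance, not necessarily
# cheaper projects.
print(pstdev(low) > pstdev(high))  # True for these numbers
```

Note that predictability is about the spread of the deviations, not their sign: an organization that always overran by exactly 10 percent would be highly predictable.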
The elimination of frequent cost overruns is sometimes explained by pointing to increased productivity and new efficiencies as an organization's process matures. Another explanation is that by imposing phases, documentation, and other artifacts of "increased maturity" (at least in the world of CMM), projects simply become more predictable. After all, it's easier to come up with an accurate cost estimate for a product whose behavior is completely specified than for a product whose behavior isn't specified at all.
However, the Kruger and Dunning paper provides an alternate explanation that bears some thought. More mature organizations might avoid overruns for reasons other than greater productivity or efficiencies, or even the imposition of additional process steps or documentation. Rather, they might be eliminating overruns because, in increasing their maturity, they have simply become more aware of what they know.
Implications for Software Development
If it were true that the important thing is gaining knowledge, the implications would be tremendous. Rather than actually changing their processes, organizations could simply study them to gain more knowledge. This doesn't mean that an organization wouldn't change its process if the change made sense, but it does imply that the gain comes not from changing the process per se but from the deeper understanding of how the organization works.
This is consistent with a variety of process improvement movements. For instance, Watts Humphrey's Personal Software Process has programmers begin by recording how they spend their time. This self-recorded information plays a critical role in the PSP framework. Obviously, programmers gain a better understanding of what they do and how they do it if they actually track how they're spending their time.
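As a minimal sketch of the kind of self-recording PSP asks for, the fragment below keeps a simple time log and summarizes where the minutes actually went. The entry fields and phase names are illustrative assumptions, not the official PSP forms:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class TimeLogEntry:
    """One row of a PSP-style time recording log (fields are illustrative)."""
    phase: str              # e.g. "design", "code", "test"
    minutes: int            # elapsed time for the work session
    interruptions: int = 0  # minutes lost to interruptions within the session

def minutes_by_phase(entries):
    """Summarize net time per phase -- the raw data a programmer later
    compares against where they *believed* the time was going."""
    totals = defaultdict(int)
    for e in entries:
        totals[e.phase] += e.minutes - e.interruptions
    return dict(totals)

log = [
    TimeLogEntry("design", 90, 10),
    TimeLogEntry("code", 240, 30),
    TimeLogEntry("test", 120),
    TimeLogEntry("code", 60),
]
print(minutes_by_phase(log))  # {'design': 80, 'code': 270, 'test': 120}
```

The value isn't in the bookkeeping itself but in the comparison it enables: the recorded totals are what let a programmer discover that, say, coding consumed three times the effort of design.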
Likewise, we're admonished to begin every process improvement effort with a baseline measurement. If my conjecture is true, perhaps we can gain much of the process improvement benefit (at least with regard to schedules and predictability) by stopping there.
I want to make clear that I'm not advocating the wholesale elimination of your process improvement efforts. However, there may be a great deal to gain from examining the relative value of the different aspects of those efforts.
WHAT DO YOU THINK?
I'd like to hear your thoughts on this issue. Is it possible to gain the benefits of process improvement without really changing the process? What's the value of knowledge? Does anyone have any good stories about clueless organizations or colleagues?
Please write me at email@example.com.
I'm doing just fine, thank you.