Vol. 24, no. 3, May/June 2007, pp. 24-30
Published by the IEEE Computer Society
Ron Jeffries , Independent Consultant
Grigori Melnik , University of Calgary
ABSTRACT
Test-driven development is a discipline of design and programming where every line of new code is written in response to a test the programmer writes just before coding. This special issue of IEEE Software includes seven feature articles on various aspects of TDD and a Point/Counterpoint debate on the use of mock objects in applying it. The articles demonstrate the ways TDD is being used in nontrivial situations (database development, embedded software development, GUI development, performance tuning), signifying an adoption level for the practice beyond the visionary phase and into the early mainstream. In this introduction to the special issue on TDD, the guest editors also summarize selected TDD empirical studies from industry and academia.
Programmers! Cast out your guilt! Spend half of your time in joyous testing and debugging! Thrill to the excitement of the chase! Stalk bugs with care, and with method, and with reason. Build traps for them. Be more artful than those devious bugs and taste the joys of guiltless programming! —Boris Beizer, 1983
Test-driven development is a discipline of design and programming where every line of new code is written in response to a test the programmer writes just before coding. As TDD practitioners, we think of what small step in capability would be a good next addition to the program. We then write a test specifying just how the program should invoke that capability and what its result should be. The test fails, showing that the capability isn't already present. We implement the code that makes the test pass and then verify that all prior tests are still passing. Finally, we review the code as it now stands, improving the design as we go in an activity called refactoring. Then we repeat the process, devising another test for another small addition to the program.
As we follow this simple cycle, shown in figure 1, the program grows into being and the design evolves with it. At the beginning of every cycle, the intention is for all tests to pass except the new one, which is "driving" the new code development. At the end of the cycle, the programmer runs all the tests, ensuring that each one passes and hence that every planned feature of the code still works.


Figure 1. The test-driven-development step cycle: design a failing test, implement code to pass the test, and improve the design via refactoring.
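The cycle in figure 1 can be made concrete with a short sketch. The `Stack` class and its test below are hypothetical illustrations (not from the article), written in Python with the standard `unittest` module:

```python
import unittest

# Step 1 of the cycle: a test written BEFORE the code, specifying how
# the new capability is invoked and what its result should be. With no
# Stack class present, this test fails first, as intended.
class StackTest(unittest.TestCase):
    def test_pop_returns_most_recently_pushed_item(self):
        stack = Stack()
        stack.push("a")
        stack.push("b")
        self.assertEqual(stack.pop(), "b")

# Step 2: the minimal implementation that makes the test pass.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()
```

Step 3, refactoring, would now review both the test and the implementation for design improvements, re-running the suite after each change before devising the next test.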

TDD is a design and programming activity, not a testing activity per se. Its testing aspect is largely confirmatory, through the regression suite it produces. Professional testers must still perform investigative testing. (The potential for confusion is spawning new terms for the discipline, such as behavior-driven development 1 and example-driven development. 2)
This special issue of IEEE Software includes seven feature articles on various aspects of TDD and a Point/Counterpoint debate on the use of mock objects in applying it. Notably, these articles demonstrate the ways TDD is being used in nontrivial situations (database development, embedded software development, GUI development, performance tuning). This signifies an adoption level for the practice beyond the visionary phase and into the early mainstream.
Alleviating fear
In practice, of course, even the best programmers make mistakes. TDD's growing collection of comprehensive tests (the regression suite) tends to detect these problems. No scheme is perfect, but TDD practitioners seem to experience a reduction in defects shipped, plus much faster problem detection.
Anyone who's worked on legacy software recognizes the situation where a system continues to function but becomes more and more outdated until, at some point, it turns into a house of cards. No one wants to touch it because even a minor code change will likely lead to an undesired side effect. With TDD, developers organically develop a test suite while building their applications. This provides a safety net for the whole system, offering reasonable confidence that no part of the code is broken. As a result, TDD helps alleviate the fear of changing the code.
In the past, most developers programmed by first writing code and then testing it. We often performed the tests manually and often gave only a cursory look at whether we had broken any past tests. In spite of what now seems like careless work, we were always surprised when someone found a bug in our code. We might have chalked it up to "just a mistake" or vowed to try harder and be more careful next time. Those tricks rarely worked.
With TDD, things are different. Automated tests specify and constrain each functional bit of the program. While these tests tend to prevent errors and detect them when they do occur, when an error does come up, our best response is to write the test that was missing—the test that would have prevented the defect.
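As a hedged illustration of that response, suppose a hypothetical `word_count` function shipped with a defect: it returned 1 for an empty string. The fix arrives together with the test that was missing:

```python
def word_count(text):
    # Bug report: word_count("") returned 1. The guard below fixes it;
    # the regression test pins the corrected behavior down.
    if not text.strip():
        return 0
    return len(text.split())

def test_empty_string_has_zero_words():
    # The test that would have prevented the defect, now part of the
    # growing regression suite.
    assert word_count("") == 0
    assert word_count("   ") == 0
    assert word_count("two words") == 2
```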
Programmers using TDD become justly confident in the code. As we become more confident, we can relax more as we work. Less stressed, we can focus more on quality because we're keeping fewer balls in the air. We become practiced in thinking about what might not work, at testing whether it does, and at making it work.
Joyous programming
The TDD approach extends the assertion Boris Beizer made in 1983: "The act of designing tests is one of the most effective bug preventers known." 3 As a practice, TDD first appeared as part of the Extreme Programming discipline, described in Kent Beck's Extreme Programming Explained, which came out in 1999. In 2002, Beck released Test-Driven Development: By Example, and Dave Astels followed soon after with Test-Driven Development: A Practical Guide. More books appeared, covering various aspects of the technique, specific tools, and project experiences (see the " Recommended Books" sidebar). TDD tools now exist for almost every computer language you can imagine, from C++ to Visual Basic, all the major scripting languages, and even some of the more exotic languages—current and past.
TDD has caught the attention of a large software development community that finds it to be a good, fast way to develop reliable code—and many of us find it to be an enjoyable way to work. It embodies elements of design, testing, and coding in a cyclical style based on one fundamental rule: never write a line of code except what's necessary to make the current test pass. The process might sound tedious in the telling, but the practice is rhythmic, quite pleasant, and productive. The swing from test to code to test occurs as frequently as every five or 10 minutes. It's been compared to a waltz, to the smooth grace of skating, and to the seemingly effortless movements of a yin-style martial art. As in all these analogous situations, the practitioner is fully engaged and concentrating, while the work just seems to flow. And like these other arts, the only way to really understand TDD is to practice it.
System-level TDD
Applied at a higher level, TDD is known as executable acceptance TDD 4 or storytest-driven development, 5 and it helps with requirements discovery, clarification, and communication. Customers, domain experts, or analysts specify tests before the features are implemented. Once the code is written, the tests serve as executable acceptance criteria. In this issue, Jennitta Andrea makes a case for better acceptance testing tools in her article, "Envisioning the Next Generation of Functional Testing Tools."
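Acceptance tests in this style are often expressed as tables of examples the customer supplies. A minimal sketch, assuming a hypothetical discount rule ("orders of 100 or more get 10 percent off"):

```python
# Rows a domain expert might write: (order_total, expected_price).
EXAMPLES = [
    (50, 50.0),
    (100, 90.0),
    (200, 180.0),
]

def discounted_price(order_total):
    # The implementation under acceptance test (hypothetical).
    return order_total * 0.9 if order_total >= 100 else float(order_total)

def run_acceptance_examples():
    # Returns the rows that fail; an empty list means every
    # customer-supplied acceptance criterion passes.
    return [(total, expected, discounted_price(total))
            for total, expected in EXAMPLES
            if discounted_price(total) != expected]
```

Tools such as Fit and FitNesse follow this pattern at scale, letting nonprogrammers author the example tables directly.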
Each test amounts to the first use of a planned new capability. This helps the developer focus on the code in actual use, not just as implemented. We can often improve a new capability's design when we have a chance to see what we're creating from the first user's viewpoint. Each test makes us think concretely about how the proposed new feature will behave. What are suitable inputs? What behavior will be executed? How will we know what happened? When we turn to writing the code a few minutes later, the concrete example helps us focus on what the code needs to do.
The process is self-correcting. If the tests are too simple, which is rare, the workflow will feel choppy and unchallenging, encouraging the developer to take larger bites. On the other hand, if the tests are too difficult, the longer time between passing tests will alert us that we might be off track.
Once developers gain some skill in TDD, they commonly report less stress during development, better requirements understanding, lower defect insertion rates, less rework, and, as a result, faster production of higher-quality code. Once "test infected," as TDD aficionados are called, a developer rarely wants to go back to the old ways.
TDD practice has a special value as part of agile methods, which are all characterized by iterative delivery of increasingly capable system versions in short cycles—usually fixed lengths of a couple of weeks to a month. At the beginning, a simple architecture and simple design are sufficient to support the system's capability. As it grows, however, the architecture and design need continuous improvement. Agile practitioners might or might not have the complete design in mind from the beginning. Either way, to deliver working system versions every few weeks, they must grow the complete design incrementally, not all at once.
Improving the design incrementally is the refactoring step in figure 1. It brings the whole design back into alignment—now just a little bit bigger and better. Changing the design in continual small steps is a good thing, in that we can deliver tangible features as we go along. But frequent design changes also carry the risk that we'll break something that used to work.
The tests we write with TDD have been built, one by one, to cause some new software property to exist and to show that it works. So, as we refactor, we can run all the tests to verify that everything that should work, in fact, still does work. This makes TDD a powerful asset to incremental software development. It becomes a rule of software development hygiene. Robert C. Martin argues for that in his article, "Professionalism and Test-Driven Development."
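A minimal illustration of that safety net, using hypothetical code: the test below was written first, one capability at a time; the refactoring step then replaces an index-based loop with a more direct expression, and re-running the suite confirms behavior is unchanged.

```python
def total_price(quantities, unit_prices):
    # After refactoring: was an explicit index-based accumulation loop;
    # a generator expression over zip() says the same thing directly.
    # The tests pass before and after the change.
    return sum(q * p for q, p in zip(quantities, unit_prices))

def test_total_price():
    # Written before the original implementation; unchanged by the
    # refactoring, it verifies the behavior is preserved.
    assert total_price([2, 3], [1.5, 2.0]) == 9.0
    assert total_price([], []) == 0
```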
The state of TDD research
TDD also intrigues the research community, and a growing number of studies have investigated its effects. Tables 1 and 2 reflect the current state of TDD research, summarizing the productivity and quality impacts of industry and academic work, respectively. 6-23 The results are sometimes controversial (more so in the academic studies). This is no surprise, given incomparable measurements and the difficulty in isolating TDD's effects from many other context variables. In addition, many studies don't have the statistical power to allow for generalizations. So, we advise readers to consider empirical findings within each study's context and environment. We also invite more researchers to methodically investigate TDD practice and report on its effects, both positive and negative.

Table 1. A summary of selected empirical studies of test-driven development: industry participants*


Table 2. A summary of selected empirical studies of TDD: academic participants*


All researchers seem to agree that TDD encourages better task focus and test coverage. The mere fact of more tests doesn't necessarily mean that software quality will be better, but the increased programmer attention to test design is nevertheless encouraging. If we view testing as sampling a very large population of potential behaviors, more tests mean a more thorough sample. To the extent that each test can find an important problem that none of the others can find, the tests are useful, especially if you can run them cheaply.
TDD is also making its way into university and college curricula. The IEEE/ACM 2004 guidelines for software engineering undergraduate programs include test-first programming as a desirable skill. 24 Educators report success stories when using TDD in computer science programming assignments. In this issue, Bas Vodde and Lasse Koskela describe an effective exercise for introducing TDD to novices—practitioners or students—in their article, "Learning Test-Driven Development by Counting Lines."
Other articles in this special issue give you a taste of TDD's use in diverse and nontrivial contexts: control system design (Thomas Dohmke and Henrik Gollee, "Test-Driven Development of a PID Controller"), GUI development (Alex Ruiz and Yvonne Wang Price, "Test-Driven GUI Development with TestNG and Abbot"), and database development (Scott W. Ambler, "Test-Driven Development of Relational Databases"). In addition, Michael J. Johnson, Chih-Wei Ho, E. Michael Maximilien, and Laurie Williams examine how to incorporate performance testing into TDD. In Point/Counterpoint, Steve Freeman and Nat Pryce debate Joshua Kerievsky on the role of mock objects in test-driving code.
Conclusion
TDD is becoming popular across all sizes and kinds of software development projects. A Cutter Consortium survey of companies about various software process improvement practices identified TDD as the practice with the second-highest impact on project success (after code inspections). 25 Of course, like any other programming tool or technique, TDD is no silver bullet. However, it can help you become a more effective and disciplined developer—fearless and joyful, too.
Happy reading!
We thank the 32 groups of authors who responded to our call for papers. Selecting seven articles from this pool would have been impossible without reviewers who contributed their expertise and effort. Our profound gratitude goes to all of them.

References

Ron Jeffries is an independent Extreme Programming author, trainer, coach, and practitioner and proprietor of www.xprogramming.com, one of the longest-running and certainly the largest one-person site on XP, comprising over 200 articles at this time. His research interests center on agile software development. He's one of the 17 original authors and signatories of the Agile Manifesto ( www.agilemanifesto.org). He has master's degrees in mathematics and in computer and communication science. Contact him at ronjeffries@acm.org.

Grigori Melnik is a software engineer, researcher, and educator, currently affiliated with the University of Calgary and SAIT Polytechnic. His research interests include empirical evaluation of agile methods, executable acceptance-test-driven development, domain-driven design, e-business software engineering, the Semantic Web, and distributed cognition in software teams. He's the research chair of the Agile 2007 conference and the program academic chair of Agile 2008. Contact him at melnik@cpsc.ucalgary.ca.