

Pages: pp. 88-91


Alex Abacus

Test Driven Development: A Practical Guide by David Astels, Prentice Hall, 2003, ISBN 0-13-101649-0, 592 pp., US$39.99.

Test Driven Development: A Practical Guide succeeds as a mechanic's guide to test-driven development or test-first development tools. However, I don't recommend it as a TDD introduction, motivation, or guide for designing good, sufficient tests.

The book explains frameworks that support TDD, benefits and problems of using Mocks in test code, GUI development using TDD, and how to keep test code and application code integrated yet separate. The book's main part (230 pages) presents a project using TDD. The project tools include Java, JUnit (the Java implementation of the xUnit framework that supports TDD), xmlUnit, Swing GUI, Jemmy, MockObjects, MockMaker, and EasyMock. An overview introduces other tools:

  • JUnit extensions: JUnitPerf, Daedalos, Gargoyle;
  • Test coverage: Jester, NoUnit, Clover;
  • IDE: Eclipse, IDEA; and
  • xUnit family members for other languages: Ruby, Smalltalk, C++, C#/.NET, Python, Visual Basic.

The project is a simple interactive application that edits and displays a list of notes about movies, with the title, genre, multiple reviewers' 0–5 star ratings, and text reviews. It includes 106 tests, and the book describes the evolution of the application's source code test by test. User requirement stories drive the project, and each step in the project's evolution satisfies the next requirement story. The steps occur in sequence as if we don't know the requirements that will follow, simulating a project with expanding requirements. New requirements often force major design changes.

The project is a long read. The reader can

  • read only the code's highlights to understand the concepts;
  • read all code to understand the TDD process mechanics;
  • compile and run the source code on a computer to feel all the road bumps.

I found the project useful but repetitive. I'd prefer a shorter project that illustrated the widest variety of approaches in a minimal number of evolutionary steps.

I was even less impressed by the book's arguments for using TDD. I expected the book to say something about designing good and sufficient tests, but the project tests are neither systematic nor exhaustive; for example, numerous tests check that a movie title isn't added twice, but no tests check whether two titles differ only in capitalization.

Furthermore, to refute a popular misconception—that you don't need modeling when following TDD—author David Astels includes an Agile Modeling appendix, but he doesn't use modeling in the project.

I came to this book already convinced that you should develop and maintain test code along with application code. I don't even have a problem with the advice that, in general, you should write tests before the application code. However, I feel that some of this book's more specific advice leads to absurd practices.

Astels advises repeatedly, "Do the simplest thing that could possibly satisfy the test." In the project, many classes have a size method for a collection. The first test is an empty collection. The method satisfying this test is always public int size() { return 0; }.

The size() method always evolved in the same way to satisfy other tests. This soon became tiresome and reminded me of a quote attributed to Albert Einstein: "Make everything as simple as possible, but no simpler."
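The evolution the book walks through can be compressed into a few lines. In this sketch (the class and method names are my own, not the book's), the comments mark the "simplest thing" at each step:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of "do the simplest thing that could
// possibly satisfy the test." The names are mine, not the book's.
public class MovieList {
    private final List<String> titles = new ArrayList<>();

    public void add(String title) {
        titles.add(title);
    }

    // Step 1: the only test is "an empty list has size 0", so the
    // simplest passing body is literally "return 0;".
    // Step 2: a test that adds a movie fails against that fake and
    // forces the obvious delegation shown here.
    public int size() {
        return titles.size();
    }

    public static void main(String[] args) {
        MovieList list = new MovieList();
        System.out.println(list.size()); // prints 0
        list.add("Star Wars");
        System.out.println(list.size()); // prints 1
    }
}
```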

The attitude that "no code should ever be written until it is required by a test" threatens to turn TDD into a mockery of an otherwise useful concept. If we also follow the advice that "at any time we should have only one test failing," we must proceed by adding only one test at a time. Deliberately ignoring known requirements that we must satisfy later generates more work overall and often provides no benefit. This development style explains sentences such as, "We faked it before, now we have to do a better job of faking it." I suspected that this sentence would lead to sloppy design and coding even before I found such examples in the project.

At one point in the project, you need to calculate an integer average rating from several integer ratings. You can add or delete individual ratings at any time. First, we test adding ratings. The first attempt at faking it accumulates the average by adding the next rating to the current average and dividing by two. Simple, right? But we don't yet have a test for deleting a rating. If we did, we might find that "unaccumulating" the average is not so simple.

The faked code fails the test for adding ratings, but the book attributes the failure to integer arithmetic "rounding errors," not to an incorrect algorithm. Next, we select a "better fake" from several candidates, including this incorrect algorithm converted to a floating point. It just happens that the other two candidates are correct algorithms. We select one, not because it's correct, but because it's judged "the simplest" of the three candidates.
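To see why the fake is an incorrect algorithm rather than a rounding problem, compare the two side by side. This is my own reconstruction, not the book's code: the "fake" averages the running average with each new rating, which weights earlier ratings less and less, while the correct version keeps the ratings and recomputes:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical reconstruction (my code, not the book's) of the
// rating-average example: a faked running average versus the
// straightforward sum-and-divide algorithm.
public class Rating {
    private final List<Integer> ratings = new ArrayList<>();
    private int fakeAverage = 0;

    public void add(int stars) {
        ratings.add(stars);
        // The fake: average the current average with the new rating.
        // Wrong as soon as there are two or more ratings, because
        // earlier ratings are progressively discounted.
        fakeAverage = (fakeAverage + stars) / 2;
    }

    public int fakeAverage() {
        return fakeAverage;
    }

    // Correct: sum all ratings and divide by the count. Keeping the
    // ratings also makes deleting one straightforward, which the
    // accumulated fake cannot easily support.
    public int average() {
        if (ratings.isEmpty()) return 0;
        int sum = 0;
        for (int r : ratings) sum += r;
        return sum / ratings.size();
    }

    public static void main(String[] args) {
        Rating r = new Rating();
        r.add(4);
        r.add(4);
        r.add(1);
        System.out.println(r.average());     // prints 3 (9 / 3)
        System.out.println(r.fakeAverage()); // prints 2: the fake drifts
    }
}
```

No amount of floating-point precision repairs the fake; the weighting itself is wrong.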

I respect test-driven development, but I have reservations about "fake first development."

Alex Abacus is a software developer. Contact him at


Mike Barker

Agile & Iterative Development: A Manager's Guide by Craig Larman, Addison-Wesley Professional, 2004, ISBN 0-13-111155-8, 342 pp., US$34.99.

It might seem absurd, but if you buy Agile & Iterative Development: A Manager's Guide by Craig Larman, I recommend that the first thing you do is cross out the subtitle. Take a black marker to the black cover and wipe out the words "A Manager's Guide."

Why? The subtitle is misleading, and you need to make sure it deceives neither you nor others. This isn't a book for project managers, functional managers, or senior managers. It provides little discussion or detail about how to manage an agile project or insight as to why, as a manager, you'd want to use these methods. However, it does provide an overview of iterative and agile methods, along with materials to help make the case for using these methods.

Who'd use this book? If you need to make a case to managers for using iterative and agile methods on a project, this might be the right book for you. The introductory sections provide explanatory materials, while the "Motivation" and "Evidence" chapters provide plenty of support for these methods. If you're looking for an introduction to iterative and agile methods written by a practitioner, this might also be a good book for you. It summarizes the key practices of Scrum, Extreme Programming (XP), Unified Process (UP), and Evo. Practice tips and an FAQ chapter help answer specific questions. However, if you're planning to use iterative and agile methods on a project, you'll need a more in-depth treatment from a book, training workshop, or mentor.


The book falls into several big chunks. Larman first gives an overview of agile methods. He also includes a short story describing a project using agile methods. The next two chapters lay out reasons and research results justifying agile methods' use. Four chapters then describe the four specific approaches mentioned earlier. Finally, practice tips and frequently asked questions round out the book.

I wish Larman had spent more time organizing the material to help the reader. I appreciated the "paper hyperlinks" in the margins (notes indicating other pages that discuss terms). However, I found flipping around distracting, and often, he didn't provide links. For example, on page 173 in the UP chapter, he says, "the UP organizes iterations within four phases" and then describes two: elaboration and construction. On page 180, he finally defines the four phases as inception, elaboration, construction, and transition. Including the missing two in the introductory paragraph or providing a pointer to the later information would have been helpful.

A word of caution

In the FAQ section, Larman makes two points that I thought he could have brought up earlier and expanded upon. First, he notes "that iterative methods are not fundamentally for improving productivity or delivery speed or reducing defects, although there is research showing correlations. Rather, they are less ambitiously for reducing the risk of failure and increasing the probability of creating something of value that the stakeholders wanted. Given that recent data shows that 23% of projects fail, this is no small feat." Given Larman's evident bias in favor of iterative methods, this is a useful caution about claiming too much for them.

Also, he explains, "for managers, perhaps the biggest shift—at least with methods such as XP and Scrum—is to step back and avoid assigning tasks and directing work, not being the taskmaster. Recall that in these methods self-directed teams and volunteering for work is important. The manager's role is to reinforce the project vision and company goals, manage risks, communicate the iteration goals, remove blocks, provide resources, and track progress." This was Larman's clearest explanation about what managers do with agile methods, and I wish he'd included more about how they can do these things in the agile world.

Larman is both a practitioner and proponent of iterative methods. If you're comfortable with plan-driven methods, you might find his litany of "software is new product development" and assertions that plan-driven approaches are bad to be irritating, antagonistic, or even hostile. I think he's expressing the kind of polarization and overstatement that early adopters often fall into. I suggest that you ignore the provocations and read the book for an expert practitioner's observations.

So, should you buy it? If you're looking for a quick introduction to agile methods and reasons to use them, the first part of this book might be worth it. The later parts will provide you with a little extra depth about four approaches and their use, which you can supplement later. If you're trying to figure out how to manage a project using agile or iterative methods, you'll probably want to look further.

And if you buy the book, cross out the subtitle.

Mike Barker is a visiting scholar at Nara Institute of Science and Technology. Contact him at


Paul Freedman

Software by Numbers: Low-Risk, High-Return Development by Mark Denne and Jane Cleland-Huang, Prentice Hall, 2003, ISBN 0-131-40728-7, 208 pp., US$39.99.

At the heart of Software by Numbers is the recognition that software development is all about creating business value. As a result, financial constraints, such as return on investment and financial risk considerations, should govern software development funding.

Mark Denne and Jane Cleland-Huang suggest that software development typically begins only after funding decisions are made—that is, after senior managers weigh the cost of developing all the anticipated new functionality against the anticipated new product revenue. They then point out that software products are special, perhaps even unique, because companies can deliver customer value incrementally in the form of ongoing upgrades that simply add new self-contained features.

This suggests, then, that by carefully designing the product architecture, you can develop product functionality in a way that reduces overall development costs by delivering early versions that have incomplete functionality but generate real revenue. The emphasis, of course, is on generating revenue—how much and how soon—so that later product development can be self-funded. Denne and Cleland-Huang call this the incremental funding methodology (IFM).

Linking technical and business concerns

To set the stage, they define the minimum marketable feature, a subset of final product functionality that delivers significant, quantifiable customer value. Associating a number with that customer value—that is, associating potential revenue with delivering the MMF to the customer—begins the "software by numbers" linkage. Then the authors discuss how best to identify MMFs and, especially, how best to order the associated product functionality development.

But because the MMFs describe value to the customer, developers must first associate them with software elements (components, modules, and so on) as defined by the product architecture. In this way, incrementally delivering customer value in the form of new MMFs occurs in the context of incrementally developing new architectural elements.

Of course, once Denne and Cleland-Huang add real-world complications, the scheduling problem for developing complete product functionality becomes NP-hard. Some complications might be precedence constraints associated with developing the architectural elements, delivery constraints associated with getting MMFs to the customer, and the impossibility of estimating customer value in dollars for some MMFs because it's intangible (for example, an increase in customer loyalty). So, the authors develop heuristics to create good but possibly suboptimal MMF orderings.
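To make the flavor of such a heuristic concrete, here is a small sketch of my own (not the authors' algorithm): greedily pick the eligible feature with the best estimated value per period of development, respecting simple precedence constraints. Real IFM sequencing juggles many more complications, which is what makes the exact problem NP-hard:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical greedy ordering heuristic (my sketch, not the book's):
// among features whose prerequisite is already delivered, schedule the
// one with the highest value density (estimated value / dev periods).
public class MmfOrdering {
    record Mmf(String name, double value, int periods, String prereq) {}

    public static List<String> order(List<Mmf> mmfs) {
        List<Mmf> remaining = new ArrayList<>(mmfs);
        List<String> done = new ArrayList<>();
        while (!remaining.isEmpty()) {
            Mmf best = remaining.stream()
                .filter(m -> m.prereq() == null || done.contains(m.prereq()))
                .max(Comparator.comparingDouble(m -> m.value() / m.periods()))
                .orElseThrow();
            done.add(best.name());
            remaining.remove(best);
        }
        return done;
    }

    public static void main(String[] args) {
        List<String> plan = order(List.of(
            new Mmf("core", 10, 2, null),         // prerequisite for the rest
            new Mmf("reports", 40, 2, "core"),    // density 20 per period
            new Mmf("export", 15, 1, "core")));   // density 15 per period
        System.out.println(plan); // prints [core, reports, export]
    }
}
```

A greedy pass like this can be arbitrarily far from optimal, which is why the authors resort to more careful heuristics.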

Finally, Denne and Cleland-Huang describe how to wed IFM to software process methods, especially the Rational Unified Process and Extreme Programming. XP is particularly well suited to IFM's emphasis on scheduling to optimize financial considerations associated with product development. Such considerations include "time to self-funding," when revenues from early, incomplete versions balance ongoing development activities, and "time to breakeven," when all development costs are recovered.
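As a rough illustration (my own sketch, not taken from the book), both milestones can be read off a sequence of per-period net cash flows: self-funding is where the per-period flow first turns non-negative, breakeven where the cumulative flow does:

```java
// Hypothetical sketch (not from the book): net cash flow per period,
// with development costs negative and revenue positive.
public class CashFlow {
    // First period in which the product's own revenue covers ongoing
    // development, i.e. the per-period net flow is non-negative.
    public static int selfFundingPeriod(int[] netFlow) {
        for (int t = 0; t < netFlow.length; t++) {
            if (netFlow[t] >= 0) return t;
        }
        return -1; // never self-funding within the horizon
    }

    // First period in which cumulative cash flow turns non-negative,
    // i.e. all development costs have been recovered.
    public static int breakevenPeriod(int[] netFlow) {
        int cumulative = 0;
        for (int t = 0; t < netFlow.length; t++) {
            cumulative += netFlow[t];
            if (cumulative >= 0) return t;
        }
        return -1; // never breaks even within the horizon
    }

    public static void main(String[] args) {
        int[] flow = {-100, -50, 20, 60, 90}; // costs first, then revenue
        System.out.println(selfFundingPeriod(flow)); // prints 2
        System.out.println(breakevenPeriod(flow));   // prints 4
    }
}
```

IFM's contribution is to make the feature ordering itself a lever for pulling both milestones earlier.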

A few caveats

But apart from the new vocabulary (MMF, IFM, and so on), is something really new going on here? Aren't all software product companies bringing to market early versions that do enough for their customers, and then using that revenue to create improved versions in the form of software upgrades that do even more?

In addition, software development at product companies must take into account associated factors that fall outside IFM. For example, if you have a well-established software product line, it's not possible to consider alternative architectures and hence alternative architectural elements. Moreover, if you're hurrying something to market because your competitor has just released something new, you might choose to forego any MMF ordering analysis because your competitors have already defined the "what to do" or the new MMFs to develop.

Situating the book's merits

So does that mean that Software by Numbers isn't worth reading? Absolutely not! If nothing else, the focus on the financial considerations associated with software product development in the early chapters is worth reading, along with the discussions of net present value, discounted cash flow, and MMF value. And I also found the association with XP interesting.

To place the book's primary contribution in context, however, we must take a closer look at IFM's origins. Denne's experience at Sun Microsystems, working on large-systems integration and application development projects on behalf of customers, certainly contributed to the methodology's development.

So if you, too, are in the business of work for hire—if your company works closely with your customer to develop something new—IFM can indeed provide a solid context for sorting out just how the work ought to get done and (in particular) get paid for. This is because MMF delivery becomes a natural payment milestone. Such milestones also serve as natural moments in product development to review dollar estimates associated with costs and revenues, and possibly to reorder what you still have to do.

Paul Freedman is president of Simlog. Contact him at

Online Reviews

  • "A Comprehensive Guide to Parallel Computing" by Art Sedighi. A review of Introduction to Parallel Computing: Design and Analysis of Algorithms, 2nd ed., by Ananth Grama, George Karypis, Vipin Kumar, and Anshul Gupta
  • "Performing on Metrics" by Harekrishna Misra. A review of Software Metrics: A Guide to Planning, Analysis, and Application, by C. Ravindranath Pandian
