Moderator: Tom Wissink - LM Fellow
Cem Kaner is Professor of Software Engineering at the Florida Institute of Technology and Executive Vice President of the Association for Software Testing. With doctorates in psychology and law, Kaner’s career theme is the satisfaction and safety of software customers and workers, a theme he has worked from several perspectives: programmer, tester, writer, teacher, human factors analyst, user interface designer, manager (tech pubs; testing; software development; director-level), software salesperson, organization development consultant, software development/management consultant, and as an attorney focusing on the law of software quality. Kaner is senior author of three books -- Lessons Learned in Software Testing; Bad Software; and Testing Computer Software. He’s also lead developer of the BBST video course series.
Software Testing as a Quality-Improvement Activity
Testing is often characterized as a relatively mindless activity that should be formalized, standardized, routinized, endlessly documented, fully automated, and, most preferably, eliminated. This webinar presents the contrasting view: good testing is challenging, cognitively complex, and customized to suit the circumstances of the individual project. The webinar presents testing as an empirical, technical investigation of a software product or service, conducted to provide quality-related information to stakeholders.

A fundamental challenge of testing is that cost/benefit analysis underlies every decision. Two tests are distinct if one can reveal an error that the other would miss. The population of distinct tests of any nontrivial program is infinite, and so any decision to do X is also a decision not to do the things that could have been done if the resources hadn't been spent on X. The question is not whether an activity is worthwhile; it is whether this activity is so much more worthwhile than the others that it would be a travesty not to do it (or at least a little bit of it). For example, system-level regression-test automation might allow us to run the same tests thousands of times at low cost, but after the first few repetitions, how much do we learn from the typical regression test? What if, instead, we spent the regression-test resources on new tests, addressing new risks (other ways the program could fail)? Quality is not quantity. If our measure is the amount (or value) of quality-related information for the stakeholders, what improves efficiency? Do we cover more ground by running in place very quickly, or by moving forward more slowly? Under what conditions do which types of automation improve testing effectiveness or efficiency?

Another fundamental challenge of testing is that quality is subjective. As Weinberg put it, "Quality is value to some person."
Meeting a specification might trigger someone's duty to pay for a program, but if the program doesn't actually meet their needs, preferences, and expectations, they won't like it, won't want to use it, and certainly won't recommend it. How should we design a testing effort to expose the potential dissatisfiers for each stakeholder?