Issue No. 04, April 1994 (vol. 20)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/32.277579
<p>Developers of large software systems must decide how long software should be tested before releasing it. A common and usually unwarranted assumption is that the code remains frozen during testing. We present a stochastic and economic framework for dealing with systems that change as they are tested. The changes can occur because of the delivery of software as it is developed, the way software is tested, the addition of fixes, and so on. Specifically, we report the details of a real-time trial of a large software system that had a substantial amount of code added during testing. We describe the methodology, give all of the relevant details, and discuss the results obtained. We pay particular attention to graphical methods that are easy to understand and that provide effective summaries of the testing process. The plots found useful by the software testers include: the Net Benefit Plot, which gives a running chart of the benefit of continued testing; the Stopping Plot, which estimates the amount of additional time needed for testing; and diagnostic plots. To encourage other researchers to try out different models, all of the relevant data are provided.</p>
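The Net Benefit Plot described above can be illustrated with a minimal sketch. Assume (this simple linear economic model, the parameter names `saving_per_fault` and `cost_per_unit_time`, and the fault-time data are illustrative, not the authors' actual model or data) that each fault found before release saves a fixed amount and each unit of testing time costs a fixed amount; the running net benefit is then savings from faults found so far minus cumulative testing cost:

```python
# Illustrative sketch of a running net-benefit curve for a testing phase.
# Assumptions (not from the paper): a linear economy where each fault
# found before release saves `saving_per_fault`, and each unit of test
# time costs `cost_per_unit_time`. The fault times below are made up.

def net_benefit_curve(fault_times, horizon, saving_per_fault, cost_per_unit_time):
    """Net benefit at each integer time t = 1..horizon:
    (faults found by t) * saving_per_fault - t * cost_per_unit_time."""
    curve = []
    for t in range(1, horizon + 1):
        found = sum(1 for ft in fault_times if ft <= t)
        curve.append(found * saving_per_fault - t * cost_per_unit_time)
    return curve

# Hypothetical fault-detection times (arbitrary units).
faults = [1, 2, 2, 4, 7, 11]
curve = net_benefit_curve(faults, horizon=12,
                          saving_per_fault=5.0, cost_per_unit_time=2.0)

# One naive stopping heuristic under this sketch: stop testing near the
# time at which the net-benefit curve peaks.
peak_time = max(range(len(curve)), key=curve.__getitem__) + 1
```

Plotting `curve` against time would give a toy version of the Net Benefit Plot; the paper's actual framework is stochastic and accounts for code that changes during testing, which this fixed-data sketch does not attempt to capture.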
program testing; software metrics; software reliability; computer graphics; large software systems; changing code; economic framework; real-time trial; software testers; Net Benefit Plot; Stopping Plot; diagnostic plots; optimal stopping rule; graphical methods; statistical inference; software reliability model; software fault detection
S. Dalal and A. McIntosh, "When to Stop Testing for Large Software Systems with Changing Code," in IEEE Transactions on Software Engineering, vol. 20, no. 4, pp. 318-323, 1994.