Some Conservative Stopping Rules for the Operational Testing of Safety-Critical Software
November 1997 (vol. 23 no. 11)
pp. 673-683

Abstract—Operational testing, which aims to generate sequences of test cases with the same statistical properties as those that would be experienced in real operational use, can be used to obtain quantitative measures of the reliability of software. In the case of safety-critical software it is common to demand that all known faults are removed. This means that if there is a failure during the operational testing, the offending fault must be identified and removed. Thus an operational test for safety-critical software takes the form of a specified number of test cases (or a specified period of working) that must be executed failure-free. This paper addresses the problem of specifying the number of test cases (or time periods) required for a test, when the previous test has terminated as a result of a failure. It has been proposed that, after the obligatory fix of the offending fault, the software should be treated as if it were completely novel, and be required to pass exactly the same test as originally specified. This reasoning is claimed to be conservative, inasmuch as no credit is given for any failure-free operation prior to the failure that terminated the test. We show that, in fact, this is not a conservative approach in all cases, and propose instead some new Bayesian stopping rules. We show that the degree of conservatism in stopping rules depends upon the precise way in which the reliability requirement is expressed. We define a particular form of conservatism that seems desirable on intuitive grounds, and show that the stopping rules that exhibit this conservatism are also precisely the ones that seem preferable on other grounds.
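The "specified number of failure-free test cases" mentioned in the abstract is conventionally obtained from the standard frequentist demonstration-test result: to claim with confidence C that the per-demand failure probability is below p, one requires n failure-free demands with (1−p)^n ≤ 1−C, i.e. n ≥ ln(1−C)/ln(1−p). The sketch below illustrates this classical calculation, which is the baseline the paper's Bayesian stopping rules are contrasted with; it is not the paper's own rule, and the function name and chosen numbers are illustrative assumptions.

```python
import math

def required_failure_free_tests(p, confidence):
    """Smallest number n of consecutive failure-free test cases such that
    observing them demonstrates, at the given confidence level, that the
    probability of failure per demand is below p.

    Solves (1 - p)**n <= 1 - confidence for integer n (classical
    single-sided demonstration test, not the paper's Bayesian rules).
    """
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

# Example: demonstrate failure probability below 10^-3 at 99% confidence.
n = required_failure_free_tests(p=1e-3, confidence=0.99)
print(n)  # 4603 failure-free demands required
```

Under the restart proposal criticized in the abstract, a failure at any point during these n demands would mean that, after the fix, the full count of n failure-free demands is required again from scratch, with no credit for the failure-free run preceding the failure.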


Index Terms:
Safety-critical software, software reliability, operational testing, statistical testing, testing stopping rule.
Bev Littlewood, David Wright, "Some Conservative Stopping Rules for the Operational Testing of Safety-Critical Software," IEEE Transactions on Software Engineering, vol. 23, no. 11, pp. 673-683, Nov. 1997, doi:10.1109/32.637384