On the Use of Testability Measures for Dependability Assessment
February 1996 (vol. 22, no. 2)
pp. 97-108

Abstract—Program "testability" is, informally, the probability that a program will fail under test if it contains at least one fault. When a dependability assessment has to be derived from the observation of a series of failure-free test executions (a common need for software subject to "ultra-high reliability" requirements), measures of testability can—in theory—be used to draw inferences on program correctness (and hence on its probability of failure in operation). In this paper, we rigorously investigate the concept of testability and its use in dependability assessment, criticizing, and improving on, previously published results.
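
To make the informal definition concrete, here is a minimal sketch using notation assumed for illustration (it does not appear in the abstract itself): if \(\theta\) denotes the testability of a program, i.e., the per-test probability that it fails given that it contains at least one fault, and \(n\) test cases are drawn independently from the testing profile, then the probability that a faulty program survives all \(n\) tests without failure is

\[ P(\text{$n$ failure-free tests} \mid \text{faulty}) = (1-\theta)^{n}. \]

A series of failure-free executions is therefore strong evidence against the presence of faults only when \(n\theta\) is large, that is, when \(\theta\) is not small compared with \(1/n\).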

We first give a general descriptive model of program execution and testing, on which the different measures of interest can be defined. We propose a more precise definition of program testability than that given by other authors, and discuss how to increase testing effectiveness without impairing program reliability in operation. We then study the mathematics of using testability to estimate, from test results: 1) the probability of program correctness and 2) the probability of failures. To derive the probability of program correctness, we use a Bayesian inference procedure and argue that this is more useful than deriving a classical "confidence level." We also show that a high testability is not an unconditionally desirable property for a program. In particular, for programs complex enough that they are unlikely to be completely fault-free, increasing testability may produce a program which will be less trustworthy, even after successful testing.
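
As an illustration of the kind of Bayesian inference referred to above (a sketch under assumed notation, not necessarily the paper's exact formulation): let \(p\) be the prior probability that the program is correct and \(\theta\) its testability. After observing \(n\) independent, failure-free test executions, Bayes' theorem gives the posterior probability of correctness

\[ P(\text{correct} \mid \text{$n$ failure-free tests}) = \frac{p}{p + (1-p)\,(1-\theta)^{n}}, \]

which grows with both \(n\) and \(\theta\). The caveat stated above follows from the same quantities: for a program that is unlikely to be completely fault-free, a larger \(\theta\) also implies a larger probability of failure whenever a fault is present, and this penalty can outweigh the added confidence gained from successful testing.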

Index Terms:
Bayesian inference, error, fault, failure, reliability assessment, software testing, testability, test oracle, ultra-high reliability.
Citation:
Antonia Bertolino, Lorenzo Strigini, "On the Use of Testability Measures for Dependability Assessment," IEEE Transactions on Software Engineering, vol. 22, no. 2, pp. 97-108, Feb. 1996, doi:10.1109/32.485220