Issue No. 11 - November (1993 vol. 19)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/32.256855
<p>The fault exposure ratio, K, is an important factor that controls the per-fault hazard rate and, hence, the effectiveness of software testing. The authors examine the variation of K with fault density, which declines with testing time. Because faults become harder to find, K should decline if testing is strictly random. However, it is shown that at lower fault densities K tends to increase. This is explained using the hypothesis that real testing is more efficient than strictly random testing, especially at the end of the test phase. Data sets from several different projects (in the USA and Japan) are analyzed. When the two factors, i.e., the shift in the detectability profile and the nonrandomness of testing, are combined, the analysis leads to the logarithmic model that is known to have superior predictive capability.</p>
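The logarithmic model referred to in the abstract is commonly identified with the Musa-Okumoto logarithmic Poisson model, and in Musa's execution-time framework the fault exposure ratio K links failure intensity to the number of remaining faults via the linear execution time. The sketch below is illustrative only: the parameter names (lam0, theta, linear_exec_time) and the rearrangement used to recover K are assumptions for demonstration, not the authors' estimation procedure.

```python
import math

def log_model_mu(t, lam0, theta):
    """Musa-Okumoto logarithmic model: expected cumulative faults
    found by execution time t, mu(t) = (1/theta) * ln(lam0*theta*t + 1)."""
    return (1.0 / theta) * math.log(lam0 * theta * t + 1.0)

def log_model_intensity(t, lam0, theta):
    """Failure intensity of the logarithmic model,
    lambda(t) = lam0 / (lam0*theta*t + 1); it decays as testing proceeds."""
    return lam0 / (lam0 * theta * t + 1.0)

def fault_exposure_ratio(intensity, remaining_faults, linear_exec_time):
    """Hypothetical rearrangement of lambda = K * N / T_L to recover K
    from an observed failure intensity, remaining-fault estimate N,
    and linear execution time T_L (names assumed for illustration)."""
    return intensity * linear_exec_time / remaining_faults
```

A usage note: plotting K recovered this way against a declining fault-density estimate is one way to visualize the trend the paper reports, namely that K rises rather than falls at low fault densities.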
fault exposure ratio; per-fault hazard rate; software testing; detectability profile; logarithmic model; predictive capability; software reliability; fault density; program testing
P. Srimani, Y. Malaiya and A. von Mayrhauser, "An Examination of Fault Exposure Ratio," in IEEE Transactions on Software Engineering, vol. 19, no. 11, pp. 1087-1094, Nov. 1993.