Issue No. 1 - Jan. 1986 (vol. 12)
Harlan D. Mills , IBM Corporation, Federal Systems Division, 6600 Rockledge Drive, Bethesda, MD 20817
The accepted approach to software development is to specify and design a product in response to a requirements analysis and then to test the software selectively with cases perceived to be typical of those requirements. Frequently, the result is a product that works well against inputs similar to those tested but is unreliable in unexpected circumstances. In contrast, it is possible to embed the software development and testing process within a formal statistical design. In such a design, software testing can be used to make statistical inferences about the reliability of the future operation of the software. In turn, the process of systematically assessing reliability permits a certification of the product at delivery that attests to a public record of defect detection and repair and to a measured level of operating reliability. This paper describes a procedure for certifying the reliability of software before its release to users. The ingredients of this procedure are a life cycle of executable product increments, representative statistical testing, and a standard estimate of the MTTF (mean time to failure) of the product at the time of its release. The paper discusses the development of certified software products and the derivation of a statistical model used for reliability projection. Available software test data are used to demonstrate the application of the model in the certification process.
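To make the idea of a statistical MTTF estimate concrete, here is a minimal sketch of how inter-failure times gathered during representative testing can yield a point estimate of MTTF. This assumes the simplest possible model (a constant failure rate, so MTTF is the mean of the observed inter-failure times); it is an illustration only, not the certification model derived in the paper, which accounts for reliability growth across successive repairs. The sample data are hypothetical.

```python
def estimate_mttf(interfailure_times):
    """Point estimate of MTTF under a constant-failure-rate assumption:
    the sample mean of the observed inter-failure times (in hours)."""
    if not interfailure_times:
        raise ValueError("need at least one observed failure")
    return sum(interfailure_times) / len(interfailure_times)

# Hypothetical inter-failure times (hours) recorded during
# representative statistical testing of a product increment.
observed = [12.0, 30.0, 55.0, 110.0]
print(estimate_mttf(observed))  # prints 51.75
```

Under the paper's approach, testing inputs are drawn from the expected operational distribution, so an estimate like this speaks to reliability in actual use rather than against hand-picked test cases.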
Index terms: software testing, software reliability, certification, statistical analysis, standards, statistical testing process, incremental development, software reliability certification, software reliability models, statistical quality control
Harlan D. Mills, "Certifying the reliability of software", IEEE Transactions on Software Engineering, vol.12, no. 1, pp. 3-11, Jan. 1986, doi:10.1109/TSE.1986.6312914