Issue No. 11 - November 1996 (vol. 29)
pp. 61-68
<p>The testing of software systems is subject to strong conflicting forces. A system must function sufficiently reliably for its application, but it must also reach the market at the same time as its competitors (preferably before) and at a competitive cost. Some systems may be less market-driven than others, but balancing reliability, delivery time, and cost is always important. One of the most effective ways to strike this balance is to engineer the test process through quantitative planning and tracking. Unfortunately, most software testing is not engineered, and the resulting product may not be as reliable as it should be, may be delivered too late, or may cost too much. Software-reliability-engineered testing (SRET) combines the use of quantitative reliability objectives and operational profiles (profiles of system use). The operational profile guides developers in testing more realistically, which makes it possible to track the reliability actually being achieved. This article describes SRET in the context of an actual AT&T project. SRET is an AT&T current best practice; qualification as an AT&T best practice requires use on eight to 10 projects and a large benefit-to-cost ratio. Practitioners have generally found SRET unique in offering a standard, proven means to engineer and manage testing in a way that increases their confidence in the reliability of the software-based systems they develop. </p>
John D. Musa, "Software-Reliability-Engineered Testing", Computer, vol.29, no. 11, pp. 61-68, November 1996, doi:10.1109/2.544239
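The operational profile mentioned in the abstract assigns each system operation its probability of occurrence in the field, and test cases are then selected in proportion to those probabilities so that testing effort mirrors expected usage. A minimal sketch of that selection step, using a hypothetical profile (the operation names and probabilities below are illustrative, not from the article):

```python
import random

# Hypothetical operational profile: each operation mapped to its
# estimated probability of occurrence in field use (sums to 1.0).
operational_profile = {
    "process_call": 0.60,
    "route_call": 0.25,
    "generate_bill": 0.10,
    "run_diagnostics": 0.05,
}

def select_test_operations(profile, n, seed=0):
    """Draw n test operations with frequencies matching the profile,
    so frequently used operations receive proportionally more testing."""
    rng = random.Random(seed)
    ops = list(profile)
    weights = [profile[op] for op in ops]
    return rng.choices(ops, weights=weights, k=n)

sample = select_test_operations(operational_profile, 1000)
# Operations with higher field-usage probability dominate the sample.
```

Testing driven this way exercises the system as users will, which is what lets the reliability measured during test be read as an estimate of the reliability users will actually experience.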