Issue No. 2, March/April 1990 (vol. 7)
pp. 56-64
ABSTRACT
The problem of knowing when to stop testing software is considered, focusing on the strategy of stopping when a reliability level, or rate of failure occurrence, acceptable to the customer is reached. The system's reliability is monitored throughout system test, and the system is released to the field only when the measured reliability meets or exceeds this objective. The approach was applied to test-failure data collected on Remote Measurement System-Digital 1 (RMS-D1), a large telecommunications testing system that had already gone through system test and been released to the field. The RMS-D1 failure data, which consisted of command-response errors versus commands executed, had been routinely collected by the system-test organization during testing. The testing phase analyzed, the load test, was an operational-profile-driven test in which a controlled load reflecting the system's busy-hour usage pattern was imposed on the system. It was found to be feasible to apply the reliability-measurement approach in real time to systems actually undergoing system test, given a controlled load-test environment.
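A minimal sketch of the release criterion the abstract describes, not the authors' actual reliability model: failure intensity is estimated as command-response errors per command executed over recent load-test intervals, and the system is considered releasable when that estimate is at or below a customer-agreed objective. All names, the sliding-window estimator, and the numeric values are illustrative assumptions.

# Sketch only: failures per command over a sliding window, compared to an
# assumed customer objective; not the paper's model, purely illustrative.

from dataclasses import dataclass
from typing import List


@dataclass
class TestInterval:
    commands_executed: int   # commands driven by the operational profile
    failures_observed: int   # command-response errors seen in the interval


def failure_intensity(intervals: List[TestInterval], window: int = 5) -> float:
    """Failures per command over the most recent `window` intervals."""
    recent = intervals[-window:]
    commands = sum(i.commands_executed for i in recent)
    failures = sum(i.failures_observed for i in recent)
    if commands == 0:
        raise ValueError("no commands executed in the window")
    return failures / commands


def ready_for_release(intervals: List[TestInterval],
                      objective: float,
                      window: int = 5) -> bool:
    """True when the measured intensity is at or below the objective."""
    return failure_intensity(intervals, window) <= objective


if __name__ == "__main__":
    # Hypothetical busy-hour load-test data: intensity falls as faults are repaired.
    history = [
        TestInterval(commands_executed=10_000, failures_observed=14),
        TestInterval(commands_executed=10_000, failures_observed=9),
        TestInterval(commands_executed=10_000, failures_observed=5),
        TestInterval(commands_executed=10_000, failures_observed=3),
        TestInterval(commands_executed=10_000, failures_observed=2),
    ]
    objective = 5e-4  # illustrative objective: 5 failures per 10,000 commands
    print(f"intensity = {failure_intensity(history):.2e}")
    print("release" if ready_for_release(history, objective) else "keep testing")

With the hypothetical data above, the windowed intensity is 6.6e-4, above the assumed objective, so the sketch reports "keep testing"; as later intervals show fewer failures, the estimate drops and the release criterion is eventually satisfied.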
INDEX TERMS
software reliability measurement; rate of failure occurrence; Remote Measurement System-Digital 1; telecommunications testing system; controlled load-test environment; software reliability
CITATION
S. Keith Lee and Willa K. Ehrlich, "Applying Reliability Measurement: A Case Study," IEEE Software, vol. 7, no. 2, pp. 56-64, March/April 1990, doi:10.1109/52.50774