Provable Improvements on Branch Testing
October 1993 (vol. 19 no. 10)
pp. 962-975

This paper compares the fault-detecting ability of several software test data adequacy criteria. It has previously been shown that if C₁ properly covers C₂, then C₁ is guaranteed to be better at detecting faults than C₂, in the following sense: a test suite selected by independent random selection of one test case from each subdomain induced by C₁ is at least as likely to detect a fault as a test suite similarly selected using C₂. In contrast, if C₁ subsumes but does not properly cover C₂, this is not necessarily the case. These results are used to compare a number of criteria, including several that have been proposed as stronger alternatives to branch testing. We compare the relative fault-detecting ability of data flow testing, mutation testing, and the condition-coverage techniques to branch testing, showing that most of the criteria examined are guaranteed to be better than branch testing according to two probabilistic measures. We also show that there are criteria that can sometimes be poorer at detecting faults than substantially less expensive criteria.
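To make the selection model in the abstract concrete, the following is a minimal sketch (not taken from the paper) of the probabilistic measure it describes: a criterion partitions the input domain into subdomains, one test case is drawn uniformly and independently from each subdomain, and the quantity of interest is the probability that at least one drawn test case is failure-causing. The subdomain sizes, failure counts, and the function name detection_probability are all hypothetical choices for illustration.

    def detection_probability(subdomains):
        """
        Probability that a test suite built by drawing one test case uniformly
        at random from each subdomain contains at least one failure-causing input.

        `subdomains` is a list of (size, failures) pairs: `size` is the number of
        inputs in the subdomain, `failures` is how many of them expose the fault.
        (Hypothetical enumeration; the paper reasons about this style of measure
        analytically rather than by counting inputs.)
        """
        p_miss = 1.0
        for size, failures in subdomains:
            # Chance that the single pick from this subdomain misses the fault.
            p_miss *= 1.0 - failures / size
        return 1.0 - p_miss

    # Hypothetical example: a finer criterion C1 splits one subdomain of a
    # coarser criterion C2 into two pieces, isolating the failure-causing inputs.
    c2_subdomains = [(100, 4), (100, 0)]
    c1_subdomains = [(50, 4), (50, 0), (100, 0)]

    print("P(detect) under C2:", detection_probability(c2_subdomains))  # 0.04
    print("P(detect) under C1:", detection_probability(c1_subdomains))  # 0.08

In this illustrative example each C₂ subdomain is a union of C₁ subdomains, which is the intuition behind the "properly covers" guarantee stated above; the specific numbers are only for demonstration.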

Index Terms:
branch testing; fault-detecting ability; software test data adequacy; test suite; independent random selection; data flow testing; mutation testing; condition-coverage techniques; probabilistic measure; software testing; program debugging; program testing; programming theory
Citation:
P.G. Frankl, E.J. Weyuker, "Provable Improvements on Branch Testing," IEEE Transactions on Software Engineering, vol. 19, no. 10, pp. 962-975, Oct. 1993, doi:10.1109/32.245738