An Experimental Comparison of the Effectiveness of Branch Testing and Data Flow Testing
August 1993 (vol. 19, no. 8)
pp. 774-787

An experiment comparing the effectiveness of the all-uses and all-edges test data adequacy criteria is discussed. The experiment was designed to overcome some of the deficiencies of previous software testing experiments. A large number of test sets was randomly generated for each of nine subject programs with subtle errors. For each test set, the percentages of executable edges and definition-use associations covered were measured, and it was determined whether the test set exposed an error. Hypothesis testing was used to investigate whether all-uses adequate test sets are more likely to expose errors than all-edges adequate test sets. Logistic regression analysis was used to investigate whether the probability that a test set exposes an error increases with the percentage of definition-use associations or edges it covers. Error-exposing ability was shown to be strongly positively correlated with the percentage of covered definition-use associations in only four of the nine subjects. Error-exposing ability was also shown to be positively correlated with the percentage of covered edges in four different subjects, but the relationship was weaker.
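
The logistic-regression step summarized above can be made concrete with a small sketch. The Python below is an illustration only, not the authors' tooling: the data are synthetic placeholders and statsmodels is an assumed analysis library. It fits the kind of model the study describes, the probability that a randomly generated test set exposes an error as a logistic function of the percentage of definition-use associations it covers; a significantly positive slope corresponds to a positive coverage/error-exposure relationship.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-in for one subject program: each row is one randomly
# generated test set, with its coverage percentage and whether it exposed
# the program's error (1) or not (0).
n = 500
coverage = rng.uniform(40.0, 100.0, size=n)   # % of def-use associations covered
true_logit = -6.0 + 0.08 * coverage           # assumed underlying relationship
exposed = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

# Fit P(exposed) = logistic(b0 + b1 * coverage); a significantly positive
# slope b1 is the kind of evidence the paper reports for four of its subjects.
X = sm.add_constant(coverage)                 # prepend the intercept column
result = sm.Logit(exposed, X).fit(disp=False)

print("intercept, slope:", result.params)
print("p-values:        ", result.pvalues)

Running the same fit with the percentage of covered edges as the predictor corresponds to the all-edges side of the comparison.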

References:
[1] A. Agresti, Analysis of Ordinal Categorical Data. New York: Wiley, 1984.
[2] V. R. Basili and R. W. Selby, "Comparing the effectiveness of software testing strategies," IEEE Trans. Software Eng., vol. SE-13, no. 12, pp. 1278-1296, Dec. 1987.
[3] G. K. Bhattacharyya and R. A. Johnson, Statistical Concepts and Methods. New York: Wiley, 1977.
[4] R. Boyer, B. Elspas, and K. Levitt, "SELECT--A formal system for testing and debugging programs by symbolic execution," SIGPLAN Notices, vol. 10, no. 6, pp. 234-245, June 1975.
[5] D. R. Byrkit, Elements of Statistics. New York: Van Nostrand, 1980.
[6] L. Clarke, A. Podgurski, D. Richardson, and S. Zeil, "A formal evaluation of data flow path selection criteria," IEEE Trans. Software Eng., vol. 15, pp. 244-251, Nov. 1989.
[7] D. Cooper and M. Clancy, Oh! Pascal!. New York: W.W. Norton, 1982.
[8] R. A. DeMillo, D. S. Guindi, K. N. King, W. M. McCracken, and A. J. Offutt, "An extended overview of the Mothra software testing environment," in Proc. Second Workshop Software Testing, Verification and Analysis, Banff, Alberta, July 1988, pp. 142-151.
[9] R. A. DeMillo, R. J. Lipton, and F. G. Sayward, "Hints on test data selection: Help for the practicing programmer," Computer, vol. 11, pp. 34-41, Apr. 1978.
[10] J. W. Duran and S. C. Ntafos, "An evaluation of random testing," IEEE Trans. Software Eng., vol. SE-10, pp. 438-444, July 1984.
[11] P. G. Frankl, "ASSET user's manual," Tech. Rep. 318, Comput. Sci. Dept., Courant Inst. Math. Sci., New York Univ., New York, NY, Sept. 1987.
[12] P. G. Frankl, "The use of data flow information for the selection and evaluation of software test data," Doctoral dissertation, New York Univ., New York, 1987.
[13] P. G. Frankl, "Partial symbolic evaluation of path expressions (version 2)," Tech. Rep. PUCS-105-90, Comput. Sci. Dept., Polytechnic Univ., Brooklyn, NY, July 1990.
[14] P. G. Frankl, S. N. Weiss, and E. J. Weyuker, "ASSET--A system to select and evaluate tests," in Proc. IEEE Conf. Software Tools, Apr. 1985, pp. 72-79.
[15] P. G. Frankl and E. J. Weyuker, "An applicable family of data flow testing criteria," IEEE Trans. Software Eng., vol. 14, pp. 1483-1498, Oct. 1988.
[16] P. G. Frankl and E. J. Weyuker, "Assessing the fault-detecting ability of testing methods," in ACM SIGSOFT '91 Conf. Software for Critical Systems, ACM Press, Dec. 1991, pp. 77-91.
[17] P. G. Frankl and E. J. Weyuker, "A formal analysis of the fault-detecting ability of testing methods," IEEE Trans. Software Eng., vol. 19, pp. 202-213, Mar. 1993.
[18] M. Girgis and M. Woodward, "An experimental comparison of the error exposing ability of program testing criteria," in Proc. IEEE Workshop Software Testing. New York: IEEE Computer Society Press, July 1986, pp. 64-73.
[19] J. Goodenough and S. Gerhart, "Toward a theory of test data selection," IEEE Trans. Software Eng., vol. SE-1, pp. 156-173, June 1975.
[20] F. Gustavson, "Remark on algorithm 408," ACM Trans. Math. Software, no. 4, p. 295, 1978.
[21] R. Hamlet, "Theoretical comparison of testing methods," in Proc. 3rd Symp. Testing, Analysis, and Verification, Dec. 1989, pp. 28-37.
[22] D. Hamlet and R. Taylor, "Partition testing does not inspire confidence," IEEE Trans. Software Eng., vol. 16, pp. 206-215, Dec. 1990.
[23] P. Herman, "A data flow analysis approach to program testing," Australian Computer J., vol. 8, no. 3, pp. 92-96, Nov. 1976.
[24] C. Hoare, "Proof of a program: Find," Commun. Ass. Comput. Mach., vol. 14, no. 1, p. 39, 1971.
[25] W. E. Howden, "A survey of dynamic analysis methods," in Tutorial: Software Testing and Validation Techniques. New York: IEEE Computer Society Press, 1978, pp. 209-231.
[26] J. C. Huang, "An approach to program testing," ACM Comput. Surveys, vol. 7, no. 3, pp. 113-128, Sept. 1975.
[27] J. Knight and N. Leveson, "An experimental evaluation of the assumption of independence in multiversion programming," IEEE Trans. Software Eng., vol. SE-12, no. 1, pp. 96-109, Jan. 1986.
[28] B. Korel and J. Laski, "STAD--a system for testing and debugging: User perspective," in Proc. 2nd Workshop on Software Testing, Verification and Analysis (Banff, Alberta, Can.), July 1988, pp. 13-20.
[29] J. W. Laski and B. Korel, "A data flow oriented program testing strategy," IEEE Trans. Software Eng., vol. SE-9, no. 3, pp. 347-354, May 1983.
[30] J. M. McNamee, "Algorithm 408: A sparse matrix package (part I) [F4]," Commun. Ass. Comput. Mach., vol. 14, Apr. 1971.
[31] S. Ntafos, "On required element testing," IEEE Trans. Software Eng., vol. SE-10, no. 6, pp. 795-803, Nov. 1984.
[32] A. J. Offutt, "Investigations of the software testing coupling effect," ACM Trans. Software Engineering Methodology, vol. 1, pp. 5-20, Jan. 1992.
[33] W. H. Press et al., Numerical Recipes--The Art of Scientific Computing, Cambridge University Press, 1986, p. 254.
[34] S. Rapps and E. J. Weyuker, "Selecting software test data using data flow information," IEEE Trans. Software Eng., vol. SE-11, no. 4, pp. 367-375, Apr. 1985.
[35] J. H. Rowland and Y. Zuyuan, "Experimental comparison of three system test strategies: Preliminary report," in Proc. ACM SIGSOFT 3rd Symp. Software Testing, Analysis, and Verification. ACM Press, Dec. 1989, pp. 141-145.
[36] M. Vouk, D. McAllister, and K. Tai, "An experimental evaluation of the effectiveness of random testing of fault-tolerant software," in Proc. IEEE Workshop Software Testing. New York: IEEE Computer Society Press, July 1986, pp. 74-81.
[37] S. N. Weiss, "Methods of comparing test data adequacy criteria," in COMPSAC 90, Oct. 1990, pp. 1-6.
[38] S. N. Weiss and P. G. Frankl, "Comparison of all-uses and all-edges: Design, data, and analysis," Tech. Rep. CS-91-03, Comput. Sci. Dept., Hunter College, NY, Mar. 1991.
[39] E. J. Weyuker and B. Jeng, "Analyzing partition testing strategies," IEEE Trans. Software Eng., vol. 17, pp. 703-711, July 1991.
[40] E. J. Weyuker, S. N. Weiss, and D. Hamlet, "Comparison of program testing strategies," in Proc. 4th Symp. Software Testing, Analysis, and Verification, ACM Press, Oct. 1991, pp. 1-10.

Index Terms:
error exposing ability; branch testing; data flow testing; all-edges test data adequacy criteria; software testing experiments; executable edges; definition-use associations; all-uses adequate test sets; regression analysis; errors; program testing
Citation:
P.G. Frankl, S.N. Weiss, "An Experimental Comparison of the Effectiveness of Branch Testing and Data Flow Testing," IEEE Transactions on Software Engineering, vol. 19, no. 8, pp. 774-787, Aug. 1993, doi:10.1109/32.238581