Software Assurance by Bounded Exhaustive Testing
April 2005 (vol. 31, no. 4)
pp. 328-339
Bounded exhaustive testing (BET) is a verification technique in which software is automatically tested for all valid inputs up to specified size bounds. A particularly interesting case of BET arises in the context of systems that take structurally complex inputs. Early research suggests that the BET approach can reveal faults in small systems with inputs of low structural complexity, but its potential utility for larger systems with more complex input structures remains unclear. We set out to test its utility on one such system. We used Alloy and TestEra to generate inputs to test the Galileo dynamic fault tree analysis tool, for which we already had both a formal specification of the input space and a test oracle. An initial attempt to generate inputs using a straightforward translation of our specification to Alloy did not work well: the generator failed to reach meaningful bounds. We developed an approach in which we factored the specification, used TestEra to generate abstract inputs based on the first factor, and passed the results through a postprocessor that reincorporated information from the second factor. Using this technique, we were able to generate test inputs to meaningful bounds, and the inputs revealed nontrivial faults in the Galileo implementation, our specification, and our oracle. Our results suggest that BET, combined with specification abstraction and factoring techniques, could become a valuable addition to our verification toolkit and that further investigation is warranted.
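The core idea of BET described in the abstract — enumerate every valid input up to a size bound and check the implementation against an oracle on each — can be sketched without the authors' Alloy/TestEra machinery. The following Python sketch is illustrative only (the names `trees`, `count`, and `bounded_exhaustive` are not from the paper): it exhaustively generates all binary-tree shapes up to a bound, the simplest analogue of the structurally complex inputs (dynamic fault trees) that Galileo consumes.

```python
# A minimal sketch of bounded exhaustive testing (BET), NOT the paper's
# TestEra/Alloy pipeline: enumerate every binary-tree shape with up to
# `bound` nodes, then check an implementation against an oracle on each.
# All names below are illustrative assumptions.

class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def trees(size):
    """Yield every binary-tree shape with exactly `size` nodes."""
    if size == 0:
        yield None          # the empty tree
        return
    # Split the remaining size - 1 nodes between left and right subtrees.
    for left_size in range(size):
        for left in trees(left_size):
            for right in trees(size - 1 - left_size):
                yield Node(left, right)

def bounded_exhaustive(bound):
    """All inputs up to the size bound -- the essence of BET."""
    for size in range(bound + 1):
        yield from trees(size)

def count(t):
    """Stand-in 'implementation under test': count the nodes of a tree."""
    return 0 if t is None else 1 + count(t.left) + count(t.right)

# Test the implementation on every input up to the bound.
inputs = list(bounded_exhaustive(4))
for size_limit, t in [(4, t) for t in inputs]:
    assert count(t) <= size_limit   # trivial oracle check
```

Even this toy generator shows why bounds matter: the number of tree shapes grows as the Catalan numbers (1, 1, 2, 5, 14, ...), which is why the paper's naive translation to Alloy failed to reach meaningful bounds and specification factoring was needed.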

[1] W.R. Adrion, M.A. Branstad, and J.C. Cherniavsky, “Validation, Verification, and Testing of Computer Software,” ACM Computing Surveys, vol. 14, no. 2, pp. 159-192, June 1982.
[2] R.J. Bayardo, Jr. and R.C. Schrag, “Using CSP Look-Back Techniques to Solve Real-World SAT Instances,” Proc. Nat'l Conf. Artificial Intelligence and Ann. Conf. Innovative Applications of Artificial Intelligence (AAAI/IAAI '97), pp. 203-208, July 1997.
[3] B. Beizer, Software Testing Techniques, second ed. New York: Van Nostrand Reinhold, 1990.
[4] C. Boyapati, S. Khurshid, and D. Marinov, “Korat: Automated Testing Based on Java Predicates,” Proc. Int'l Symp. Software Testing and Analysis (ISSTA '02), pp. 123-133, July 2002.
[5] M.A. Boyd, “Dynamic Fault Tree Models: Techniques for Analysis of Advanced Fault Tolerant Computer Systems,” PhD dissertation, Dept. of Computer Science, Duke Univ., Durham, N.C., Apr. 1991.
[6] J. Chang and D.J. Richardson, “Structural Specification-Based Testing: Automated Support and Experimental Evaluation,” Proc. European Software Eng. Conf. and ACM SIGSOFT Symp. Foundations of Software Eng., Sept. 1999.
[7] J.J. Chilenski and S.P. Miller, “Applicability of Modified Condition/Decision Coverage to Software Testing,” Software Eng. J., vol. 9, no. 5, pp. 193-200, Sept. 1994.
[8] D. Coppit, “Engineering Modeling and Analysis: Sound Methods and Effective Tools,” PhD dissertation, Dept. of Computer Science, The Univ. of Virginia, Charlottesville, Va., Jan. 2003.
[9] D. Coppit, R.R. Painter, and K.J. Sullivan, “Shared Semantic Domains for Computational Reliability Engineering,” Proc. Int'l Symp. Software Reliability Eng., pp. 168-180, Nov. 2003.
[10] D. Coppit and K.J. Sullivan, “Galileo: A Tool Built from Mass-Market Applications,” Proc. Int'l Conf. Software Eng., pp. 750-753, June 2000.
[11] D. Coppit and K.J. Sullivan, “Sound Methods and Effective Tools for Engineering Modeling and Analysis,” Proc. Int'l Conf. Software Eng., pp. 198-207, May 2003.
[12] D. Coppit, K.J. Sullivan, and J.B. Dugan, “Formal Semantics of Models for Computational Engineering: A Case Study on Dynamic Fault Trees,” Proc. Int'l Symp. Software Reliability Eng., pp. 270-282, Oct. 2000.
[13] J. Dick and A. Faivre, “Automating the Generation and Sequencing of Test Cases from Model-Based Specifications,” Proc. Conf. Formal Methods Eng.: Industrial-Strength Formal Methods, pp. 268-284, Apr. 1993.
[14] S.A. Doyle and J.B. Dugan, “Dependability Assessment Using Binary Decision Diagrams (BDDs),” Proc. Int'l Fault-Tolerant Computing Symp., pp. 249-258, July 1995.
[15] J.B. Dugan, S. Bavuso, and M. Boyd, “Dynamic Fault-Tree Models for Fault-Tolerant Computer Systems,” IEEE Trans. Reliability, vol. 41, no. 3, pp. 363-377, Sept. 1992.
[16] J.B. Dugan, K.J. Sullivan, and D. Coppit, “Developing a Low-Cost High-Quality Software Tool for Dynamic Fault Tree Analysis,” IEEE Trans. Reliability, vol. 49, no. 1, pp. 49-59, Mar. 2000.
[17] P. Godefroid, “Model Checking for Programming Languages Using VeriSoft,” Proc. Conf. Principles of Programming Languages, pp. 174-186, Jan. 1997.
[18] R. Gulati and J.B. Dugan, “A Modular Approach for Analyzing Static and Dynamic Fault Trees,” Proc. Ann. Reliability and Maintainability Conf., pp. 57-63, Jan. 1997.
[19] M.J. Harrold, “Testing: A Roadmap,” Proc. ICSE '00— Future of Software Eng., pp. 61-72, June 2000.
[20] K. Havelund and T. Pressburger, “Model Checking Java Programs Using Java PathFinder,” Int'l J. Software Tools for Technology Transfer, vol. 2, no. 4, Apr. 2000.
[21] S. Helke, T. Neustupny, and T. Santen, “Automating Test Case Generation from Z Specifications with Isabelle,” LNCS, vol. 1212, pp. 52-71, 1997.
[22] H.-M. Hörcher, “Improving Software Tests Using Z Specifications,” LNCS, vol. 967, pp. 152-166, 1995.
[23] D. Jackson, “Micromodels of Software: Modelling and Analysis with Alloy,” 2001.
[24] D. Jackson and C.A. Damon, “Elements of Style: Analyzing a Software Design Feature with a Counterexample Detector,” IEEE Trans. Software Eng., vol. 22, no. 7, pp. 484-495, July 1996.
[25] D. Jackson, I. Schechter, and I. Shlyakhter, “Alcoa: The Alloy Constraint Analyzer,” Proc. Int'l Conf. Software Eng., pp. 730-733, June 2000.
[26] S. Khurshid and D. Marinov, “Checking Java Implementation of a Naming Architecture Using TestEra,” Electronic Notes in Theoretical Computer Science, vol. 55, no. 3, 2001.
[27] S. Khurshid, D. Marinov, I. Shlyakhter, and D. Jackson, “A Case for Efficient Solution Enumeration,” LNCS, vol. 2919, pp. 272-286, 2004.
[28] S. Khurshid and C. Pasareanu, “Generalized Symbolic Execution for Model Checking and Testing,” Proc. Int'l Conf. Tools and Algorithms for Construction and Analysis of Systems, Apr. 2003.
[29] J.C. King, “Symbolic Execution and Program Testing,” Comm. ACM, vol. 19, no. 7, pp. 385-394, July 1976.
[30] J.C. Knight and N.G. Leveson, “An Experimental Evaluation of the Assumption of Independence in Multiversion Programming,” IEEE Trans. Software Eng., vol. 12, pp. 96-109, Jan. 1986.
[31] I. MacColl, D. Carrington, and P. Stocks, “An Experiment in Specification-Based Testing,” Technical Report 96-05, Software Verification Research Centre, Dept. of Computer Science, The Univ. of Queensland, May 1996.
[32] D. Marinov, A. Andoni, D. Daniliuc, S. Khurshid, and M. Rinard, “An Evaluation of Exhaustive Testing for Data Structures,” Technical Report MIT-LCS-TR-921, MIT CSAIL, Cambridge, Mass., Sept. 2003.
[33] D. Marinov and S. Khurshid, “TestEra: A Novel Framework for Automated Testing of Java Programs,” Proc. Int'l Conf. Automated Software Eng., Nov. 2001.
[34] L. Morell, “A Theory of Fault-Based Testing,” IEEE Trans. Software Eng., vol. 16, no. 8, pp. 844-857, Aug. 1990.
[35] M.W. Moskewicz, C.F. Madigan, Y. Zhao, L. Zhang, and S. Malik, “Chaff: Engineering an Efficient SAT Solver,” Proc. Design Automation Conf., June 2001.
[36] J. Offutt and A. Abdurazik, “Generating Tests from UML Specifications,” Proc. Int'l Conf. Unified Modeling Language, Oct. 1999.
[37] S. Rapps and E. Weyuker, “Selecting Software Test Data Using Data Flow Information,” IEEE Trans. Software Eng., vol. 11, no. 4, pp. 367-375, Apr. 1985.
[38] M. Saaltink, “The Z/EVES System,” LNCS, vol. 1212, pp. 72-85, 1997.
[39] E. Schwartz, “Design and Implementation of Intentional Names,” master's thesis, MIT Laboratory for Computer Science, Massachusetts Inst. of Technology, Cambridge, June 1999.
[40] I. Shlyakhter, “Generating Effective Symmetry-Breaking Predicates for Search Problems,” Proc. Workshop Theory and Applications of Satisfiability Testing, June 2001.
[41] J.M. Spivey, The Z Notation: A Reference Manual. Prentice-Hall, 1992.
[42] P. Stocks and D. Carrington, “A Framework for Specification-Based Testing,” IEEE Trans. Software Eng., vol. 22, no. 11, pp. 777-793, Nov. 1996.
[43] K.J. Sullivan, J.B. Dugan, and D. Coppit, “The Galileo Fault Tree Analysis Tool,” Proc. Int'l Fault-Tolerant Computing Symp., pp. 232-235, June 1999.
[44] K.J. Sullivan, J. Yang, D. Coppit, S. Khurshid, and D. Jackson, “Software Assurance by Bounded Exhaustive Testing,” Proc. Int'l Symp. Software Testing and Analysis, pp. 133-142, July 2004.
[45] Sun Microsystems, Java 2 Platform, Standard Edition, v1.3.1 API Specification, 2001.
[46] W.E. Vesely, F.F. Goldberg, N.H. Roberts, and D.F. Haasl, “Fault Tree Handbook,” Technical Report NUREG-0492, US Nuclear Regulatory Commission, Washington, D.C., 1981.
[47] E.J. Weyuker and T.J. Ostrand, “Theories of Program Testing and the Application of Revealing Subdomains,” IEEE Trans. Software Eng., vol. 6, no. 3, pp. 236-246, May 1980.
[48] H. Zhu, P.A.V. Hall, and J.H.R. May, “Software Unit Test Coverage and Adequacy,” ACM Computing Surveys, vol. 29, no. 4, pp. 366-427, Dec. 1997.

Index Terms: Formal methods, program verification, testing and debugging.
David Coppit, Jinlin Yang, Sarfraz Khurshid, Wei Le, Kevin Sullivan, "Software Assurance by Bounded Exhaustive Testing," IEEE Transactions on Software Engineering, vol. 31, no. 4, pp. 328-339, April 2005, doi:10.1109/TSE.2005.52