Issue No. 4 - July/August 2008 (vol. 34)
pp. 452-470
Matthew J. Rutherford , University of Colorado at Boulder, Boulder
Antonio Carzaniga , University of Lugano, Lugano
Alexander L. Wolf , Imperial College London, London
ABSTRACT
Test adequacy criteria provide the engineer with guidance on how to populate test suites. While adequacy criteria have long been a focus of research, existing testing methods do not address many of the fundamental characteristics of distributed systems, such as distribution topology, communication failure, and timing. Furthermore, they do not provide the engineer with a means to evaluate the relative effectiveness of different criteria, nor the relative effectiveness of adequate test suites satisfying a given criterion. This paper makes three contributions to the development and use of test adequacy criteria for distributed systems: (1) a testing method based on discrete-event simulations; (2) a fault-based analysis technique for evaluating test suites and adequacy criteria; and (3) a series of case studies that validate the method and technique. The testing method uses a discrete-event simulation as an operational specification of a system, in which the behavioral effects of distribution are explicitly represented. Adequacy criteria and test cases are then defined in terms of this simulation-based specification. The fault-based analysis involves mutation of the simulation-based specification to provide a foil against which test suites and the criteria that formed them can be evaluated. Three distributed systems were used to validate the method and technique, including DNS, the Domain Name System.
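The core ideas of the abstract — a discrete-event simulation serving as an operational specification, and mutation of that specification to evaluate test suites — can be illustrated in miniature. The sketch below is not from the paper; all names, the retry protocol, and the specific mutation (a halved timeout) are illustrative assumptions. A client sends a request over a possibly lossy link and retries on timeout; a "mutant" specification alters the timeout, and a test case "kills" the mutant if the original and mutated simulations produce observably different behavior.

```python
import heapq

def simulate(link_delay, timeout, drop_first_reply):
    """Discrete-event simulation (illustrative): return the virtual time
    at which the client first observes a reply, simulating a simple
    request/retry protocol over a link that may drop the first reply."""
    events = []  # priority queue of (virtual_time, action) pairs
    heapq.heappush(events, (0.0, "send"))
    dropped = False
    while events:
        now, action = heapq.heappop(events)
        if action == "send":
            if drop_first_reply and not dropped:
                dropped = True  # message lost in transit, no reply scheduled
            else:
                heapq.heappush(events, (now + link_delay, "reply"))
            heapq.heappush(events, (now + timeout, "retry"))
        elif action == "reply":
            return now  # client observes the reply; simulation ends
        elif action == "retry":
            heapq.heappush(events, (now, "send"))
    return None

def kills_mutant(link_delay, drop_first_reply):
    """Fault-based analysis in miniature: compare the original
    specification (timeout=2.0) against a mutant (timeout=1.0).
    The test case kills the mutant if the observed behaviors differ."""
    original = simulate(link_delay, timeout=2.0, drop_first_reply=drop_first_reply)
    mutant = simulate(link_delay, timeout=1.0, drop_first_reply=drop_first_reply)
    return original != mutant

# A test without communication failure cannot distinguish the timeout
# mutation; a test that exercises message loss can.
no_loss = kills_mutant(0.5, drop_first_reply=False)   # False: mutant survives
with_loss = kills_mutant(0.5, drop_first_reply=True)  # True: mutant killed
```

This toy example mirrors the paper's point that distribution-specific behaviors (here, communication failure and timing) must be exercised by a test suite before mutants of the specification that perturb those behaviors can be detected.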
INDEX TERMS
Specification, Test coverage of specifications
CITATION
Matthew J. Rutherford, Antonio Carzaniga, Alexander L. Wolf, "Evaluating Test Suites and Adequacy Criteria Using Simulation-Based Models of Distributed Systems", IEEE Transactions on Software Engineering, vol.34, no. 4, pp. 452-470, July/August 2008, doi:10.1109/TSE.2008.33