Issue No. 1 - January/February 2009 (vol. 35)
pp. 29-45
Sebastian Elbaum , University of Nebraska, Lincoln
Hui Nee Chin , University of Nebraska, Lincoln
Matthew B. Dwyer , University of Nebraska, Lincoln
Matthew Jorde , University of Nebraska, Lincoln
ABSTRACT
Unit test cases are focused and efficient. System tests are effective at exercising complex usage patterns. Differential unit tests (DUTs) are a hybrid of unit and system tests that exploits the strengths of both. They are generated by carving, during the execution of a system test case, the components of the system that influence the behavior of the target unit, and then reassembling those components so that the unit can be exercised as it was by the system test. In this paper we show that DUTs retain some of the advantages of unit tests, can be generated automatically, and have the potential to reveal faults related to intricate system executions. We present a framework for carving and replaying DUTs that accounts for a wide variety of strategies and tradeoffs; we implement an automated instance of the framework with several techniques to mitigate test cost and enhance flexibility and robustness; and we empirically assess the efficacy of carving and replaying DUTs on three software artifacts.
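To make the carve-and-replay idea concrete, the minimal Java sketch below illustrates one way a differential unit test could be assembled: while a system test drives a call on the target unit, the unit's pre-state, arguments, and observed result are serialized (here with XStream, one serialization library the paper's tooling draws on), and the carved test later reloads that state, re-invokes the unit in isolation, and compares its behavior with what the system test observed. The Account unit, method names, and file layout are hypothetical illustrations under these assumptions, not the authors' actual implementation.

import com.thoughtworks.xstream.XStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical target unit; stands in for whatever class the system test exercises.
class Account {
    double balance;
    Account(double balance) { this.balance = balance; }
    double applyInterest(double rate) {
        balance += balance * rate;
        return balance;
    }
}

public class CarveAndReplaySketch {
    static final XStream XSTREAM = new XStream();
    static { XSTREAM.allowTypes(new Class[] { Account.class }); }

    // Carving: while the system test drives a call on the target unit, record the
    // receiver's pre-state, the arguments, and the result the unit produced.
    static void carve(Account receiver, double rate, Path dir) throws Exception {
        Files.writeString(dir.resolve("pre-state.xml"), XSTREAM.toXML(receiver));
        Files.writeString(dir.resolve("args.xml"), XSTREAM.toXML(rate));
        double observed = receiver.applyInterest(rate); // the call made during the system test
        Files.writeString(dir.resolve("expected.xml"), XSTREAM.toXML(observed));
    }

    // Replaying: reload the carved state, re-invoke the unit in isolation, and compare
    // its behavior against what the system test observed (the differential check).
    static boolean replay(Path dir) throws Exception {
        Account receiver = (Account) XSTREAM.fromXML(Files.readString(dir.resolve("pre-state.xml")));
        double rate = (Double) XSTREAM.fromXML(Files.readString(dir.resolve("args.xml")));
        double expected = (Double) XSTREAM.fromXML(Files.readString(dir.resolve("expected.xml")));
        double actual = receiver.applyInterest(rate);
        return Math.abs(actual - expected) < 1e-9;
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("dut-sketch");
        carve(new Account(100.0), 0.05, dir);              // carved once, from the original program
        System.out.println("DUT passes: " + replay(dir));  // rerun against the changed program
    }
}

In this toy form the carved state is only the receiver and one argument; the framework described in the paper must also carve whatever reachable system state influences the unit, which is where the strategies and tradeoffs discussed in the abstract come in.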
INDEX TERMS
Testing strategies, Test execution, Test design
CITATION
Sebastian Elbaum, Hui Nee Chin, Matthew B. Dwyer, and Matthew Jorde, "Carving and Replaying Differential Unit Test Cases from System Test Cases," IEEE Transactions on Software Engineering, vol. 35, no. 1, pp. 29-45, January/February 2009, doi:10.1109/TSE.2008.103