21st IEEE/ACM International Conference on Automated Software Engineering (ASE 2006)
Tokyo, Japan
Sept. 18, 2006 to Sept. 22, 2006
ISSN: 1527-1366
ISBN: 0-7695-2579-2
pp: 59-68
Michael D. Ernst , MIT, Cambridge, MA, USA
Tao Xie , North Carolina State University, Raleigh, NC, USA
Darko Marinov , University of Illinois, Urbana-Champaign, IL, USA
Marcelo d'Amorim , University of Illinois, Urbana-Champaign, IL, USA
Carlos Pacheco , MIT, Cambridge, MA, USA
ABSTRACT
Testing involves two major activities: generating test inputs and determining whether they reveal faults. Automated test generation techniques include random generation and symbolic execution. Automated test classification techniques include ones based on uncaught exceptions and violations of operational models inferred from manually provided tests. Previous research on unit testing for object-oriented programs developed three pairs of these techniques: model-based random testing, exception-based random testing, and exception-based symbolic testing. We develop a novel pair, model-based symbolic testing. We also empirically compare all four pairs of these generation and classification techniques. The results show that the pairs are complementary (i.e., reveal faults differently), with their respective strengths and weaknesses.
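As an illustrative sketch (not taken from the paper), the exception-based random testing pair described above combines random generation of method-call sequences with classification by uncaught exceptions: a run that terminates with an unexpected exception is flagged as a candidate fault. The `BoundedStack` class, its capacity, and the sequence length below are hypothetical choices for demonstration only.

```python
import random

class BoundedStack:
    """Toy class under test (hypothetical; not from the paper)."""
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.items = []

    def push(self, x):
        if len(self.items) >= self.capacity:
            raise OverflowError("stack full")
        self.items.append(x)

    def pop(self):
        # Raises IndexError when the stack is empty.
        return self.items.pop()

def random_test(seed, length=5):
    """Randomly generate a call sequence, then classify the run:
    no exception -> presumed passing; uncaught exception -> candidate fault."""
    rng = random.Random(seed)
    stack = BoundedStack()
    trace = []
    try:
        for _ in range(length):
            op = rng.choice(["push", "pop"])
            trace.append(op)
            if op == "push":
                stack.push(rng.randint(0, 9))
            else:
                stack.pop()
        return trace, None                     # classified as passing
    except Exception as exc:
        return trace, type(exc).__name__       # classified as fault-revealing

# Run a small random test suite and separate the suspicious sequences.
results = [random_test(seed) for seed in range(50)]
suspicious = [r for r in results if r[1] is not None]
```

Model-based classification, by contrast, would additionally check each call sequence against an operational model (e.g., invariants inferred from existing passing tests) rather than relying on exceptions alone.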
CITATION
Michael D. Ernst, Tao Xie, Darko Marinov, Marcelo d'Amorim, Carlos Pacheco, "An Empirical Comparison of Automated Generation and Classification Techniques for Object-Oriented Unit Testing", Proceedings of the 21st IEEE/ACM International Conference on Automated Software Engineering (ASE 2006), pp. 59-68, 2006, doi:10.1109/ASE.2006.13