2015 IEEE/ACM 10th International Workshop on Automation of Software Test (AST)
Florence, Italy
May 23, 2015 to May 24, 2015
ISBN: 978-1-4673-7022-6
pp: 80-84
ABSTRACT
Software testability is the degree of difficulty involved in testing a program. Code visibility is important for supporting design principles such as information hiding. It is widely believed that code visibility affects testability. However, little empirical evidence exists to clarify whether and how software testability is influenced by code visibility. We have performed an empirical study to shed light on this problem. Our study focuses on test code coverage, in particular the coverage achieved by automatic testing tools. Code coverage is commonly used for various purposes, such as evaluating test adequacy, assessing test quality, and analyzing testability. Our study uses code coverage as a concrete measure of testability. By analyzing the code coverage of two state-of-the-art tools and comparing it with that of developer-written tests, we found that code visibility does not necessarily affect the coverage of developer-written tests, but it significantly affects automatic testing tools. Low code visibility often leads to low code coverage for automatic tools. In addition, different treatments of code visibility can result in significant differences in overall code coverage for automatic tools. Using a tool enhancement specific to code visibility, we demonstrate considerable potential for improving existing tools.
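The sketch below illustrates the kind of visibility effect the abstract describes, assuming Java access modifiers as the notion of code visibility; the class and method names are hypothetical and are not taken from the paper. A developer-written test inside the same project can exercise the low-visibility members, while an automatic test generator that only invokes public API from another package may never drive execution into them, leaving their branches uncovered.

// Hypothetical illustration of low code visibility limiting generated-test coverage.
package example;

public class Account {
    private int balance;

    public void deposit(int amount) {
        // The only path into the private helper goes through this public method.
        if (validate(amount)) {
            balance += amount;
        }
    }

    // Private: not directly callable by tests generated outside this class,
    // so its false branch may stay uncovered unless deposit() is called with
    // a non-positive amount.
    private boolean validate(int amount) {
        return amount > 0;
    }

    // Package-private: reachable only from tests placed in the same package.
    int getBalance() {
        return balance;
    }
}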
INDEX TERMS
Automatic testing, Computer bugs, Java, Software, Runtime, Indexes
CITATION

L. Ma, C. Zhang, B. Yu and H. Sato, "An Empirical Study on Effects of Code Visibility on Code Coverage of Software Testing," 2015 IEEE/ACM 10th International Workshop on Automation of Software Test (AST), Florence, Italy, 2015, pp. 80-84.
doi:10.1109/AST.2015.23