2017 IEEE/ACM 39th International Conference on Software Engineering: Software Engineering in Practice Track (ICSE-SEIP) (2017)
Buenos Aires, Argentina
May 20-28, 2017
Atif Memon, Department of Computer Science, University of Maryland, College Park, MD, USA
Zebao Gao, Department of Computer Science, University of Maryland, College Park, MD, USA
Bao Nguyen, Google Inc., Mountain View, CA, USA
Sanjeev Dhanda, Google Inc., Mountain View, CA, USA
Eric Nickell, Google Inc., Mountain View, CA, USA
Rob Siemborski, Google Inc., Mountain View, CA, USA
John Micco, Google Inc., Mountain View, CA, USA
Growth in Google's code size and feature churn rate has led to increased reliance on continuous integration (CI) and testing to maintain quality. Even with enormous resources dedicated to testing, we are unable to regression test each code change individually, resulting in increased lag time between code check-ins and test result feedback to developers. We report results of a project that aims to reduce this time by: (1) controlling test workload without compromising quality, and (2) distilling test results data to inform developers, while they write code, of the impact of their latest changes on quality. We model, empirically understand, and leverage the correlations that exist between our code, test cases, developers, programming languages, and code-change and test-execution frequencies, to improve our CI and development processes. Our findings show that very few of our tests ever fail, but those that do are generally "closer" to the code they test; that certain frequently modified code and certain users/tools cause more breakages; and that code recently modified by multiple developers (more than three) breaks more often.
Google, Testing, Delays, Tools, Computer languages, Software engineering, Electronic mail
A. Memon et al., "Taming Google-scale continuous testing," 2017 IEEE/ACM 39th International Conference on Software Engineering: Software Engineering in Practice Track (ICSE-SEIP), Buenos Aires, Argentina, 2017, pp. 233-242.