Issue No. 05 - September/October (2010 vol. 36)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TSE.2010.28
Stephen MacDonell, Auckland University of Technology, Auckland
Martin Shepperd, Brunel University, West London
Barbara Kitchenham, Keele University, Keele
Emilia Mendes, The University of Auckland, Auckland
BACKGROUND—The systematic review is becoming a more commonly employed research instrument in empirical software engineering. Before undue reliance is placed on the outcomes of such reviews, it would seem useful to consider the robustness of the approach in this particular research context. OBJECTIVE—The aim of this study is to assess the reliability of systematic reviews as a research instrument. In particular, we wish to investigate the consistency of process and the stability of outcomes. METHOD—We compare the results of two independent reviews undertaken with a common research question. RESULTS—The two reviews find similar answers to the research question, although the means of arriving at those answers vary. CONCLUSIONS—In addressing a well-bounded research question, groups of researchers with similar domain experience can arrive at the same review outcomes, even though they may do so in different ways. This provides evidence that, in this context at least, the systematic review is a robust research method.
Empirical software engineering, meta-analysis, systematic review, cost estimation.
E. Mendes, M. Shepperd, B. Kitchenham and S. MacDonell, "How Reliable Are Systematic Reviews in Empirical Software Engineering?," in IEEE Transactions on Software Engineering, vol. 36, no. 5, pp. 676-687, 2010.