2012 20th IEEE International Requirements Engineering Conference (RE) (2012)
Chicago, IL, USA
Sept. 24, 2012 to Sept. 28, 2012
Wei-Keat Kong , Department of Computer Science, University of Kentucky, Lexington, KY, USA
Jane Huffman Hayes , Department of Computer Science, University of Kentucky, Lexington, KY, USA
Alex Dekhtyar , Department of Computer Science, CalPoly, San Luis Obispo, CA, USA
Olga Dekhtyar , Department of Statistics, CalPoly, San Luis Obispo, CA, USA
Human analysts working with results from automated traceability tools often make incorrect decisions that lead to lower-quality final trace matrices. Because a human must vet the results of trace tools for mission- and safety-critical systems, the hope of developing expedient and accurate tracing procedures lies in understanding how analysts work with trace matrices. This paper describes a study that uses logs of analyst actions to understand when and why humans make correct and incorrect decisions during tracing tasks. In addition to the traditional measures of recall and precision that describe the accuracy of the results, we introduce and study new measures that focus on analyst work quality: potential recall, sensitivity, and effort distribution. We use these measures to visualize analyst progress toward the final trace matrix, identifying factors that may influence analyst performance and determining how actual tracing strategies, derived from the analyst logs, affect results.
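The traditional accuracy measures mentioned above have standard information-retrieval definitions: recall is the fraction of true links that appear in the submitted trace matrix, and precision is the fraction of submitted links that are true. A minimal sketch, using hypothetical requirement-to-design link sets (the identifiers below are illustrative, not from the study):

```python
# Standard recall/precision over trace links, where a link is a
# (source artifact, target artifact) pair. Both link sets are hypothetical.
true_links = {("R1", "D1"), ("R2", "D3"), ("R3", "D2")}       # the "answer set"
submitted_links = {("R1", "D1"), ("R2", "D3"), ("R2", "D4")}  # analyst's final matrix

correct = true_links & submitted_links
recall = len(correct) / len(true_links)          # true links recovered: 2/3
precision = len(correct) / len(submitted_links)  # submitted links that are true: 2/3

print(f"recall={recall:.2f} precision={precision:.2f}")
```

The paper's new measures (potential recall, sensitivity, effort distribution) are defined over the log of analyst actions rather than the final matrix alone, so they are not reproduced here.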
Tracing Strategies, Traceability, Human Factors, Performance Measures, Process Improvement
W. Kong, J. H. Hayes, A. Dekhtyar and O. Dekhtyar, "Process improvement for traceability: A study of human fallibility," 2012 20th IEEE International Requirements Engineering Conference (RE), Chicago, IL, USA, 2012, pp. 31-40.