Issue No. 3, May/June 2009 (vol. 35)
pp: 368-383
Magne Jørgensen , Simula Research Laboratory and University of Oslo, Norway
Tanja M. Gruschke , KnowIT Objectnet, Oslo
ABSTRACT
Inaccurate estimates of software development effort are a frequently reported cause of IT-project failures. We report results from a study that investigated the effect of introducing lessons-learned sessions on estimation accuracy and the assessment of uncertainty. Twenty software professionals were randomly allocated to a Learning group or a Control group and instructed to estimate and complete the same five development tasks. Those in the Learning group, but not those in the Control group, were instructed to spend at least 30 minutes identifying, analyzing, and summarizing their effort estimation and uncertainty assessment experience after completing each task. We found that estimation accuracy and the realism of the uncertainty assessments were no better in the Learning group than in the Control group. A follow-up study with 83 software professionals was completed to better understand this lack of improvement from lessons-learned sessions. The follow-up study found that receiving feedback about other software professionals' estimation performance led to more realistic uncertainty assessments than receiving the same type of feedback about one's own estimates. Lessons-learned sessions, and not only those in estimation contexts, must be carefully designed to avoid wasting resources on learning processes that reinforce rather than reduce learning biases.
INDEX TERMS
Cost estimation, process implementation and change, review and evaluation, software psychology.
CITATION
Magne Jørgensen, Tanja M. Gruschke, "The Impact of Lessons-Learned Sessions on Effort Estimation and Uncertainty Assessments", IEEE Transactions on Software Engineering, vol.35, no. 3, pp. 368-383, May/June 2009, doi:10.1109/TSE.2009.2
REFERENCES
[1] CompTIA, “Survey: Poor Communication Causes Most IT Project Failures; Inadequate Resource Planning, Unrealistic Deadlines Also Cited in CompTIA Study,” Computerworld, 2007.
[2] K. Moløkken and M. Jørgensen, “A Review of Software Surveys on Software Effort Estimation,” Proc. Int'l Symp. Empirical Software Eng., pp. 223-230, 2003.
[3] M. Jørgensen, K.H. Teigen, and K. Moløkken, “Better Sure than Safe? Over-Confidence in Judgement Based Software Development Effort Prediction Intervals,” J. Systems and Software, vol. 70, nos. 1/2, pp. 79-93, 2004.
[4] M. Jørgensen and M. Shepperd, “A Systematic Review of Software Cost Estimation Studies,” IEEE Trans. Software Eng., vol. 33, no. 1, pp. 33-53, Jan. 2007.
[5] M. Jørgensen, “Estimation of Software Development Work Effort: Evidence on Expert Judgment and Formal Models,” Int'l J. Forecasting, vol. 23, no. 3, pp. 449-462, 2007.
[6] M. Jørgensen, “A Review of Studies on Expert Estimation of Software Development Effort,” J. Systems and Software, vol. 70, nos. 1/2, pp. 37-60, 2004.
[7] W.S. Humphrey, Introduction to the Personal Software Process. Addison-Wesley, 1996.
[8] P. Abrahamsson and K.H. Kautz, “Personal Software Process: Classroom Experiences from Finland,” Proc. European Conf. Software Quality, pp. 175-185, 2002.
[9] L. Prechelt and B. Unger, “An Experiment Measuring the Effects of Personal Software Process (PSP) Training,” IEEE Trans. Software Eng., vol. 27, no. 5, pp. 465-472, May 2001.
[10] V. Basili, H. Caldiera, and D. Rombach, “The Experience Factory,” Encyclopedia of Software Engineering, J.J. Marciniak, ed., pp. 469-476, Wiley, 1994.
[11] S. Engelkamp, S. Hartkopf, and P. Brössler, “Project Experience Database: A Report Based on First Practical Experience,” Proc. Int'l Conf. Product Focused Software Development and Process Improvement, pp. 204-215, 2000.
[12] F. Houdek, K. Schneider, and E. Wieser, “Establishing Experience Factories at Daimler-Benz: An Experience Report,” Proc. Int'l Conf. Software Eng., pp. 443-447, 1998.
[13] M. Jørgensen, D.I.K. Sjøberg, and R. Conradi, “Reuse of Software Development Experience at Telenor Telecom Software,” Proc. European Software Process Improvement Conf., pp. 10.19-10.31, 1998.
[14] A. Birk, T. Dingsøyr, and T. Stålhane, “Postmortem: Never Leave a Project without It,” IEEE Software, vol. 19, no. 3, pp. 43-45, May/June 2002.
[15] T. Dingsøyr, “Postmortem Reviews: Purpose and Approaches in Software Engineering,” Information and Software Technology, vol. 47, pp. 293-303, 2005.
[16] M.C. Ohlsson, C. Wohlin, and B. Regnell, “A Project Effort Estimation Study,” Information and Software Technology, vol. 40, no. 14, pp. 831-839, 1998.
[17] W.K. Balzer, M.E. Doherty, and R. O'Connor, “Effects of Cognitive Feedback on Performance,” Psychological Bull., vol. 106, no. 3, pp. 410-433, 1989.
[18] P.G. Benson, “The Effects of Feedback and Training on the Performance of Probability Forecasters,” Int'l J. Forecasting, vol. 8, no. 4, pp. 559-573, 1992.
[19] R.E. Stone and R.B. Opel, “Training to Improve Calibration and Discrimination: The Effects of Performance and Environmental Feedback,” Organizational Behavior and Human Decision Processes, vol. 83, no. 2, pp. 282-309, 2000.
[20] N. Schmitt, B.W. Coyle, and L. King, “Feedback and Task Predictability as Determinants of Performance in Multiple Cue Probability Learning Tasks,” Organizational Behavior and Human Decision Processes, vol. 16, no. 2, pp. 388-402, 1976.
[21] M. Schindler and M.J. Eppler, “Harvesting Project Knowledge: A Review of Project Learning Methods and Success Factors,” Int'l J. Project Management, vol. 21, pp. 219-228, 2003.
[22] M. Cusumano and R. Selby, Microsoft Secrets—How the World's Most Powerful Software Company Creates Technology, Shapes Markets, and Manages People. The Free Press, 1995.
[23] M. Jørgensen, “A Critique of How We Measure and Interpret the Accuracy of Software Development Effort Estimation,” Proc. First Int'l Workshop Software Productivity Analysis and Cost Estimation, pp. 15-22, 2007.
[24] G. Pan, S.L. Pan, and M. Newman, “Information Systems Project Post-Mortem: Insights from an Attribution Perspective,” J. Am. Soc. for Information Science and Technology, vol. 58, no. 14, pp. 2255-2268, 2007.
[25] K. Lyytinen and D. Robey, “Learning Failure in Information Systems Development,” Information Systems J., vol. 9, no. 2, pp. 85-101, 1999.
[26] D. Wastell, “Learning Dysfunctions in Information Systems Development: Overcoming the Social Defenses with Transitional Objects,” MIS Quarterly, vol. 23, no. 4, pp. 581-600, 1999.
[27] M. Urban and A. Witt, “Self-Serving Biases in Group Member Attributions of Success and Failures,” J. Social Psychology, vol. 130, no. 3, pp. 417-419, 1990.
[28] M. Jørgensen and K. Moløkken-Østvold, “Reasons for Software Effort Estimation Error: Impact of Respondent Role, Information Collection Approach, and Data Analysis Method,” IEEE Trans. Software Eng., vol. 30, no. 12, pp. 993-1007, Dec. 2004.
[29] M. Jørgensen and K. Moløkken-Østvold, “Eliminating Over-Confidence in Software Development Effort Estimates,” Proc. Conf. Product Focused Software Process Improvement, pp. 174-184, 2004.
[30] M. Jørgensen, K.H. Teigen, and K. Moløkken, “Better Sure than Safe? Over-Confidence in Judgement Based Software Development Effort Prediction Intervals,” J. Systems and Software, vol. 70, nos. 1/2, pp. 79-93, 2004.
[31] M. Jørgensen and D. Sjøberg, “The Importance of Not Learning from Experience,” Proc. European Software Process Improvement Conf., pp. 2.2-2.8, 2000.
[32] K.R. Hammond, Human Judgement and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice. Oxford Univ. Press, 1996.
[33] B. Fischhoff, “Hindsight ≠ Foresight: The Effect of Outcome Knowledge on Judgement under Uncertainty,” J. Experimental Psychology: Human Perception and Performance, vol. 1, pp. 288-299, 1975.
[34] D. Stahlberg et al., “We Knew It All Along: Hindsight Bias in Groups,” Organizational Behavior and Human Decision Processes, vol. 63, no. 1, pp. 46-58, 1995.
[35] C.F. Camerer and E.J. Johnson, “The Process-Performance Paradox in Expert Judgment: How Can Experts Know So Much and Predict So Badly?” Towards a General Theory of Expertise, K.A. Ericsson and J. Smith, eds., pp. 195-217, Cambridge Univ. Press, 1991.
[36] D.M. Sanbonmatsu, A.A. Sharon, and E. Biggs, “Overestimating Causality: Attributional Effects of Confirmatory Processing,” J. Personality and Social Psychology, vol. 65, no. 5, pp. 892-903, 1993.
[37] B. Brehmer, “In One Word: Not from Experience,” Acta Psychologica, vol. 45, pp. 223-241, 1980.
[38] F. Bolger and G. Wright, “Assessing the Quality of Expert Judgment: Issues and Analysis,” Decision Support Systems, vol. 11, no. 1, pp. 1-24, 1994.
[39] J. Shanteau, “Competence in Experts: The Role of Task Characteristics,” Organizational Behavior and Human Decision Processes, vol. 53, no. 2, pp. 252-266, 1992.
[40] M. Jørgensen, “Realism in Assessment of Effort Estimation Uncertainty: It Matters How You Ask,” IEEE Trans. Software Eng., vol. 30, no. 4, pp. 209-217, Apr. 2004.
[41] M. Cannon and A. Edmondson, “Failing to Learn and Learning to Fail (Intelligently): How Great Organizations Put Failure to Work to Innovate and Improve,” Long Range Planning, vol. 38, pp. 299-319, 2005.
[42] B. Collier, T. Demarco, and P. Fearey, “A Defined Process for Project Post-Mortem Review,” IEEE Software, vol. 13, no. 4, pp. 65-71, July 1996.