A Cognitive-Based Mechanism for Constructing Software Inspection Teams
November 2004 (vol. 30 no. 11)
pp. 811-825
Software inspection is well-known as an effective means of defect detection. Nevertheless, recent research has suggested that the technique requires further development to optimize the inspection process. As the process is inherently group-based, one approach to improving performance is to attempt to minimize the commonality within the process and the group. This paper proposes an approach to add diversity into the process by using a cognitively-based team selection mechanism. The paper argues that a team with diverse information processing strategies, as defined by the selection mechanism, will maximize the number of different defects discovered.
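The selection idea in the abstract can be illustrated with a small sketch: given candidate inspectors tagged with MBTI-style cognitive types (the kind of information-processing classification the paper draws on), pick the team whose members jointly cover the most distinct preferences. The candidate names, the scoring function, and the exhaustive search below are our own illustrative assumptions, not the paper's actual mechanism.

```python
from itertools import combinations

# Hypothetical candidate pool: each inspector is tagged with an
# MBTI-style four-letter cognitive type (illustrative data only).
candidates = {
    "alice": "INTJ",
    "bob":   "ESFP",
    "carol": "INTP",
    "dave":  "ENFJ",
    "erin":  "ISTJ",
}

def diversity(team):
    """Count, across the four MBTI dimensions, how many distinct
    preference letters the team covers (maximum 8: E/I, S/N, T/F, J/P).
    A higher score means more diverse information-processing strategies."""
    return sum(len({candidates[m][i] for m in team}) for i in range(4))

def pick_team(size):
    """Exhaustively examine all teams of the given size and return
    one whose members cover the most distinct preferences."""
    return max(combinations(candidates, size), key=diversity)

team = pick_team(3)
print(team, diversity(team))
```

With this pool, a three-person team can already cover all eight preference letters; a real selection mechanism would also weigh availability, experience, and the reading technique assigned to each inspector.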

[1] G. Tassey, “The Economic Impacts of Inadequate Infrastructure for Software Testing,” Nat'l Inst. of Standards and Technology Report, 2002.
[2] M.E. Fagan, “Design and Code Inspections to Reduce Errors in Program Development,” IBM Systems J., vol. 15, no. 3, pp. 182-211, 1976.
[3] M.E. Fagan, “Advances in Software Inspection,” IEEE Trans. Software Eng., vol. 12, no. 7, pp. 744-751, July 1986.
[4] E.P. Doolan, “Experience with Fagan's Inspection Method,” Software Practice and Experience, vol. 22, no. 2, pp. 173-182, Feb. 1992.
[5] G.W. Russell, “Experience with Inspection in Ultra Large-Scale Developments,” IEEE Software, vol. 8, no. 1, pp. 25-31, Jan. 1991.
[6] V.R. Basili, S. Green, O. Laitenberger, F. Lanubile, F. Shull, S. Sorumgard, and M.V. Zelkowitz, “The Empirical Investigation of Perspective-Based Reading,” Int'l Software Eng. Research Network, 1996.
[7] B. Regnell, P. Runeson, and T. Thelin, “Are the Perspectives Really Different?— Further Experimentation on Scenario-Based Reading of Requirements,” Empirical Software Eng., vol. 5, no. 4, pp. 331-356, 2000.
[8] J. Carver, “The Impact of Background and Experience on Software Inspections,” PhD thesis, Univ. of Maryland, Apr. 2003.
[9] C. Jung, The Portable Jung. J. Campbell, ed., Harmondsworth: Penguin, 1971.
[10] J.K. DiTiberio and A.L. Hammer, Introduction to Type in College. Consulting Psychologists Press, 1993.
[11] I. Briggs Myers and P.B. Myers, Gifts Differing: Understanding Personality Type. Consulting Psychologists Press, 1995.
[12] A. Furnham, C. Jackson, and T. Miller, “Personality, Learning Style and Work Performance,” Personality and Individual Differences, vol. 27, pp. 1113-1122, 1999.
[13] P.C. Nutt, Making Tough Decisions. Jossey-Bass, 1989.
[14] I.B. Myers and M.H. McCaulley, Manual: A Guide to the Development and Use of Myers-Briggs Type Indicator. Consulting Psychologists Press, 1985.
[15] J. Doyle, M.J. Radzicki, A. Rose, and W. Trees, “Using Cognitive Styles Typology to Explain Individual Differences in Decision Making,” Worcester Polytechnic Inst. Research Report No. 13, 1998.
[16] D. Poirier, “Presentation at ‘Growing With Type’,” 1998, the table from this presentation is available at: http://www.infj.org/typestats.html.
[17] R.O. Mason and I.I. Mitroff, “A Program for Research on Management Information Systems,” Management Science, vol. 19, no. 5, pp. 475-487, Jan. 1973.
[18] R.E. Hunt, F.J. Krzystofiak, and A.M. Yousry, “Cognitive Style and Decision Making,” Organizational Behavior and Human Decision Processes, vol. 44, pp. 436-453, 1989.
[19] T.A. Maxwell, “Decisions: Cognitive Style, Mental Models and Task Performance,” PhD Dissertation, Rockefeller College of Public Affairs and Policy, 1995.
[20] C.W. Allinson and J. Hayes, “Validity of the Learning Styles Questionnaire,” Psychological Reports, vol. 67, no. 3, pp. 859-866, 1990.
[21] D.L. Davis, S.L. Grove, and P.A. Knowles, “An Experimental Application of Personality Type as an Analogue for Decision-Making Style,” Psychological Reports, vol. 66, pp. 167-175, 1990.
[22] J.E. McGrath, Groups, Interaction and Performance. Englewood Cliffs, N.J.: Prentice-Hall, 1984.
[23] Z. Yin, “Subjective Estimation and Team Selection in Software Inspection,” MSc thesis, Dept. Electrical and Computer Eng., Univ. of Alberta, available from: http://www.steam.ualberta.ca/main/research_areas/Appendices.htm, 2003.
[24] T. Heberlein and R. Baumgartner, “Factors Affecting Response Rates to Mailed Questionnaires: A Quantitative Analysis of the Published Literature,” Am. Sociological Rev., vol. 43, pp. 447-462, 1978.
[25] F.J. Yammarino, S.J. Skinner, and T.L. Childers, “Understanding Mail Survey Response Behaviour: A Meta-Analysis,” Public Opinion Quarterly, vol. 55, pp. 613-639, 1991.
[26] B. Burchell and C. Marsh, “The Effect of Questionnaire Length on Survey Response,” Quality and Quantity, vol. 26, pp. 233-244, 1992.
[27] J.G. Helgeson and M.L. Ursic, “The Role of Affective and Cognitive Decision-Making Processes During Questionnaire Completion,” Public Opinion Quarterly, vol. 11, no. 5, pp. 493-510, 1994.
[28] A.R. Herzog and J.G. Bachman, “Effects of Questionnaire Length on Response Quality,” Public Opinion Quarterly, vol. 45, pp. 549-559, 1981.
[29] W. Applegate, J. Blass, and T. Williams, “Instruments for the Functional Assessment of Older Patients,” New England J. Medicine, vol. 322, pp. 1207-1214, 1990.
[30] G. Lawrence, People Types and Tiger Stripes, Center for Applications of Psychological Type, third ed. July 1993.
[31] L.C. Briand, K. El Emam, B.G. Freimut, and O. Laitenberger, “A Comprehensive Evaluation of Capture-Recapture Models for Estimating Software Defect Content,” IEEE Trans. Software Eng., vol. 26, no. 6, pp. 518-540, 2000.
[32] J. Miller, “Estimating the Number of Remaining Defects after Inspection,” J. Software Testing, Verification and Reliability, vol. 9, pp. 167-189, 1999.
[33] V.R. Basili, “The Role of Experimentation in Software Engineering: Past, Current, and Future,” Proc. 18th Int'l Conf. Software Eng., pp. 442-449, 1996.
[34] J. Miller, “Applying Meta-Analytical Procedures to Software Engineering Experiments,” J. Systems and Software, vol. 54, pp. 29-39, 2000.
[35] Z. Yin, A. Dunsmore, and J. Miller, “Self-Assessment of Performance in Software Inspection Processes,” J. Information and Software Technology, vol. 46, pp. 185-194, 2004.
[36] C. Wohlin, P. Runeson, M. Host, M.C. Ohlsson, B. Regnell, and A. Wesslen, Experimentation in Software Engineering: An Introduction. Kluwer Academic, 2000.
[37] W.R. Pirie, “NPSP: A Nonparametric Statistical Package,” Dept. of Statistics and Statistical Consulting Center, Virginia Polytechnic Inst. and State Univ., Blacksburg, Va., 1983.
[38] M. Hollander and D.A. Wolfe, Nonparametric Statistical Methods. Wiley, 1973.
[39] D.H. Johnson, “The Insignificance of Statistical Significance Testing,” J. Wildlife Management, vol. 63, no. 3, pp. 763-772, 1999.
[40] R. Courtney and D. Gustafson, “Shotgun Correlations in Software Measures,” Software Eng. J., vol. 8, no. 1, pp. 5-13, 1993.
[41] T. Sellke, M.J. Bayarri, and J.O. Berger, “Calibration of P-Values for Testing Precise Null Hypotheses,” Technical Report 99-13, Inst. of Statistics and Decision Sciences, Duke Univ., 1999.
[42] J. Miller, “Statistical Significance Testing— A Panacea for Software Technology Experiments,” J. Systems and Software, vol. 73, pp. 183-192, 2004.

Index Terms:
Planning for SQA and V&V, code inspections and walkthroughs, programming teams, software psychology.
Citation:
James Miller, Zhichao Yin, "A Cognitive-Based Mechanism for Constructing Software Inspection Teams," IEEE Transactions on Software Engineering, vol. 30, no. 11, pp. 811-825, Nov. 2004, doi:10.1109/TSE.2004.69