Computer-Mediated Group Support, Anonymity, and the Software Inspection Process: An Empirical Investigation
February 2003 (vol. 29, no. 2), pp. 167-180

Abstract—In software inspection, a key principle endorsed by Fagan is openness. However, scholars have recently questioned the efficacy of openness. For example, some argue that ego-involvement and personality conflicts, made more visible by openness, might impede inspection. Others point out that familiarity and preexisting relationships among inspection team members negatively affect the comprehensiveness of defect detection. These observations raise the concern that openness as originally envisioned by Fagan may in fact lead to suboptimal outcomes. As the trend toward computer-based inspection continues, we believe that anonymity could play a positive role in overcoming some of the drawbacks noted in team-based inspection. Drawing upon the literature on software inspection and group support systems, this research proposes possible influences of group-member anonymity on the outcome of computer-mediated software inspection and empirically examines the validity of the posited relationships in a set of controlled laboratory experiments. Two inspection tasks with different levels of software code complexity are employed. While the control groups (i.e., teams without anonymity) and the treatment groups (i.e., teams with support for anonymity) take roughly the same amount of time to perform the inspection tasks, the treatment groups are more effective at identifying the seeded errors in the more complex task. Treatment groups also express a more positive attitude toward both code inspection tasks. The findings of the study suggest a number of directions for future research.
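To make the notion of "seeded errors" concrete, the sketch below shows a hypothetical defect of the kind typically planted in inspection-experiment material: a deliberate off-by-one error that inspectors must locate by reading the code. The function names and the specific defect are illustrative assumptions, not the actual task material used in the study.

```python
# Hypothetical seeded defect of the kind used in inspection experiments:
# the function's docstring states the intent, but the loop bound is
# deliberately wrong, and an inspector must spot the discrepancy.

def sum_first_n(values, n):
    """Intended behavior: return the sum of the first n elements of values."""
    total = 0
    for i in range(n - 1):  # seeded defect: should be range(n)
        total += values[i]
    return total


def sum_first_n_fixed(values, n):
    """Corrected version: sums exactly the first n elements."""
    return sum(values[:n])
```

An inspector detects the defect by comparing the stated intent against the loop bound: `sum_first_n([1, 2, 3, 4], 3)` sums only two elements instead of three, while the fixed version returns the intended result.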

[1] A.F. Ackerman, L.S. Buchwald, and F.H. Lewski, “Software Inspections: An Effective Verification Process,” IEEE Software, pp. 31-36, May 1989.
[2] B. Boehm, P. Grunbacher, and R.O. Briggs, “EasyWinWin: A Groupware-Supported Methodology for Requirements Negotiation,” Proc. 23rd Int'l Conf. Software Eng., pp. 720-721, 2001.
[3] L. Brothers, V. Sembugamoorthy, and M. Muller, “Icicle: Groupware for Code Inspection,” Proc. Conf. Computer Supported Cooperative Work, pp. 169-181, Oct. 1990.
[4] D.T. Campbell and J.C. Stanley, Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally College Publishing Co., 1966.
[5] Codestriker, May 2002, see http:/
[6] J. Cohen, Statistical Power Analysis for the Behavioral Sciences. Hillsdale, N.J.: Lawrence Erlbaum Associates, 1988.
[7] T. Connolly, L. Jessup, and J. Valacich, “Effects of Anonymity and Evaluative Tone on Idea Generation in Computer-Mediated Groups,” Management Science, vol. 36, no. 6, pp. 689-703, 1990.
[8] CoReview, May 2002.
[9] M. Er and A. Ng, “The Anonymity and Proximity Factors in Group Decision Support Systems,” Decision Support Systems, vol. 14, no. 1, pp. 75-83, 1995.
[10] M. Fagan, “Design and Code Inspections to Reduce Errors in Program Development,” IBM Systems J., vol. 15, no. 3, pp. 182-211, 1976.
[11] M. Fagan, “Advances in Software Inspections,” IEEE Trans. Software Eng., vol. 12, no. 7, pp. 744-751, July 1986.
[12] J. Fjermestad and S.R. Hiltz, “An Assessment of Group Support Systems Experimental Research: Methodology and Results,” J. Management Information Systems, vol. 15, no. 3, pp. 7-149, Winter 1998-1999.
[13] J. George, G. Easton, J. Nunamaker, and G. Northcraft, “A Study of Collaborative Group Work With and Without Computer-Based Support,” Information Systems Research, vol. 1, no. 4, pp. 394-415, 1990.
[14] J. Gintell, J. Arnold, M. Houde, J.K. McKenney, R. McKenney, and G. Memmi, “Scrutiny: A Collaborative Inspection and Review System,” Proc. Fourth European Software Eng. Conf., Sept. 1993.
[15] A. Gopal and P. Prasad, “Understanding GDSS in Symbolic Context: Shifting the Focus from Technology to Interaction,” MIS Quarterly, vol. 24, no. 3, pp. 509-546, 2000.
[16] B. Ives and M.H. Olson, “User Involvement in Information Systems Development: A Review of Research,” Management Science, vol. 30, no. 5, pp. 586-603, 1984.
[17] L. Jessup, T. Connolly, and J. Gallagher, “The Effects of Anonymity on Group Process in an Idea-Generating Task,” MIS Quarterly, vol. 14, no. 3, pp. 313-321, 1990.
[18] L. Jessup and D. Tansik, “Decision-Making in an Automated Environment: The Effects of Anonymity and Proximity with a Group Decision Support System,” Decision Sciences, vol. 22, no. 2, pp. 266-279, 1991.
[19] L. Jessup and J. Valacich, Group Support Systems: New Perspectives. New York: Macmillan, 1993.
[20] P.M. Johnson, “An Instrumented Approach to Improving Software Quality Through Formal Technical Review,” Proc. 16th Int'l Conf. Software Eng., Sorrento, Italy, May 1994.
[21] P.M. Johnson, “Reengineering Inspection,” Comm. ACM, vol. 41, no. 2, pp. 49-52, 1998.
[22] J. Knight and E.A. Myers, “An Improved Inspection Technique,” Comm. ACM, vol. 36, no. 11, pp. 51-61, Nov. 1993.
[23] O. Laitenberger and J. DeBaud, “An Encompassing Life-Cycle Centric Survey of Software Review,” J. Systems and Software, vol. 50, no. 1, pp. 5-31, Jan. 2000.
[24] F. Macdonald, J. Miller, A. Brooks, M. Roper, and M. Wood, “A Review of Tool Support for Software Inspection,” Proc. Seventh Int'l Workshop Computer-Aided Software Eng. (CASE-95), July 1995.
[25] V. Mashayekhi, W. Tsai, J. Drake, and J. Riedl, “Distributed, Collaborative Software Inspection,” IEEE Software, vol. 10, no. 5, pp. 66-75, 1993.
[26] M.C. Paulk, C.V. Weber, and B. Curtis, The Capability Maturity Model: Guidelines for Improving the Software Process. New York: Addison-Wesley, 1995.
[27] A. Porter, H. Siy, C. Toman, and L. Votta, “An Experiment to Assess the Cost-Benefits of Code Inspections in Large Scale Software Development,” IEEE Trans. Software Eng., vol. 23, no. 6, pp. 329-346, June 1997.
[28] A.A. Porter and P.M. Johnson, “Assessing Software Review Meetings: Results of a Comparative Analysis of Two Experimental Studies,” IEEE Trans. Software Eng., vol. 23, no. 3, pp. 129-145, Mar. 1997.
[29] V.S. Rao and S. Jarvenpaa, “Computer Support of Groups: Theory-Based Models for GDSS Research,” Management Science, vol. 37, no. 10, pp. 1349-1362, 1991.
[30] ReviewPro, May 2002, see http://www.sdtcorp.com/reviewpro.htm.
[31] C. Sauer, D.R. Jeffery, L. Land, and P. Yetton, “The Effectiveness of Software Development Technical Reviews: A Behaviorally Motivated Program of Research,” IEEE Trans. Software Eng., vol. 26, no. 1, pp. 1-14, 2000.
[32] C.B. Seaman and V.R. Basili, “Communication and Organization: An Empirical Study of Discussion in Inspection Meetings,” IEEE Trans. Software Eng., vol. 24, no. 7, pp. 559-572, July 1998.
[33] M. Stein, J. Riedl, S.J. Harner, and V. Mashayekhi, “A Case Study of Distributed, Asynchronous Software Inspection,” Proc. Int'l Conf. Software Eng., pp. 107-117, 1997.
[34] S. Strauss and R. Ebenau, Software Inspection Process. McGraw-Hill, 1994.
[35] J. Valacich, A. Dennis, and J. Nunamaker, “Group Size and Anonymity Effects on Computer Mediated Idea Generation,” Small Group Research, vol. 23, no. 1, pp. 49-73, 1992.
[36] M. van Genuchten et al., “Industrial Experience in Using Group Support Systems for Software Inspections,” IEEE Software, vol. 18, no. 3, pp. 60-65, May/June 2001.
[37] L.G. Votta, “Does Every Inspection Need a Meeting?” ACM Software Eng. Notes, vol. 18, no. 5, pp. 107-114, Dec. 1993.
[38] I. Zigurs and K. Kozar, “An Exploratory Study of Roles in Computer-Supported Groups,” MIS Quarterly, vol. 18, no. 3, pp. 277-297, 1994.

Index Terms:
Anonymity, controlled experiment design, group support systems, seeded errors, software inspection, software quality assurance.
Padmal Vitharana, K. Ramamurthy, "Computer-Mediated Group Support, Anonymity, and the Software Inspection Process: An Empirical Investigation," IEEE Transactions on Software Engineering, vol. 29, no. 2, pp. 167-180, Feb. 2003, doi:10.1109/TSE.2003.1178054