IEEE Software, vol. 30, no. 1, Jan.-Feb. 2013
pp. 81-83
What works for whom, where, when, and why is the ultimate question of evidence-based software engineering. Still, the empirical research seems mostly concerned with identifying universal relationships that are independent of how work settings and other contexts interact with the processes important to software practice. Questions of “What is best?” seem to prevail. For example, “Which is better: pair or solo programming? test-first or test-last?” However, just as the question of whether a helicopter is better than a bicycle is meaningless, so are these questions because the answers depend on the settings and goals of the projects studied. Practice settings are rarely, if ever, the same. For example, the environments of software organizations differ, as do their sizes, customer types, countries or geography, and history. All these factors influence engineering practices in unique ways. Additionally, the human factors underlying the organizational culture differ from one organization to the next and also influence the way software is developed. We know these issues and the ways they interrelate are important for the successful uptake of research into practice. However, the nature of these relationships is poorly understood. Consequently, we can't a priori assume that the results of a particular study apply outside the specific context in which it was run. Here, I offer an overview of how context affects empirical research and how to better contextualize empirical evidence so that others can better understand what works for whom, where, when, and why.
Index terms: software development management, organizational aspects, organizational culture, empirical evidence contextualization, evidence-based software engineering, software practice, pair programming, solo programming, test-first, test-last, practice settings, software organization environments, context awareness, information processing, content management, empirical software engineering
Tore Dybå, "Contextualizing Empirical Evidence," IEEE Software, vol. 30, no. 1, Jan.-Feb. 2013, pp. 81-83; doi:10.1109/MS.2013.4.