• Do more cost-effective metrics exist for accurately tracking value and monitoring progress across large, complex, and geographically distributed teams?
• How can researchers improve the accuracy, fidelity, and insight associated with development metrics?
• Can the data available to developers be put to good use?
• How can researchers demonstrate that new practices provide value and improve progress in these environments?
• Use data and experiences to demonstrate why newly proposed techniques improve on the state of the practice, or to illuminate how they can be improved. Such an experiential basis is one of the first things our reviewers check for in technical submissions.
• Be open to appropriate uses of experience reports as well as more rigorous forms of technical communication. One way to give practitioners useful decision support more quickly is to share stories about what works and what doesn't in practice.
• Describing such experiences still requires care so that they read as helpful reports rather than anecdotes. The Insights department has been both leading the way on this and providing mentoring.
• Treat replication as a first-class goal and repeatability as an important quality check. Research papers shouldn't be turned away simply because they contribute additional data that improve our understanding of important practices and how those practices fare across different environments. Let's not overemphasize novelty at the expense of practical results.