In my inbox, I keep a folder set aside for storing particularly significant emails that I've received in my career. Many are records of past successes—awards, letters of recommendation, congratulations from people I particularly respect. These are the ones useful for hauling out on rare occasions, when the going is especially tough, in order to psych myself up again for the task at hand.
Some of the emails I keep are not as much fun to reread, but are valuable nonetheless.
One of these I received just after getting my PhD. I had a new job as a research scientist, with a grant from the US National Science Foundation safely in hand to continue research on my dissertation topic. The NSF program director had asked for success stories from the project that he could use to demonstrate the effectiveness of the research funding, and I had dutifully sent off a report on a workshop we had just held with collaborators. At the workshop, we had met to discuss a set of replicated experiments on a new development practice and had gotten our hands dirty taking a detailed look at each other's data. I was pleased that we were well on our way to compiling a body of knowledge regarding that technology, had spent some time worrying about the statistical significance (or lack thereof) across our body of studies, and had tried to reflect all of this in the success story I submitted.
An email came quickly back from the NSF program director. Judgment had been swift and decisive:
Thank you for the detailed and informative report. It is now my job to subject you to the humiliation I would be subjected to if I took this report to the upper administration in NSF or to "the Hill." As I see it, a bunch of scientists have done what scientists always do, nitpicking over statistically insignificant differences and presenting their results to others of the initiated for their comment and approval. … When I hear a researcher enthusiastic and excited about a result, I trust there is something there so that we can share the excitement. Often it is not easy to do that, however.
Direct and without any pulled punches, the letter had its intended effect. A bit dazed, I thought that of course I knew there were other stakeholders for this research. Abstracting away the statistical details, the data we were compiling were overturning some common beliefs about how best to produce software. So why not help readers see that this was what we were doing? I had gotten into research because I believed it could, and should, make a difference to software developers in the field. So why had I stopped trying to make that connection?
Although the message was painful, it served—and continues to serve—as a direct reminder of the reason I'd chosen my career in the first place, but had managed to lose track of in the day-to-day grind. Writing technical papers for "the initiated," in which careful studies are elaborately described and statistical significance is an important tool, is still an important part of all researchers' lives, but this email reminded me of how much of the job goes beyond that. For me, one of the most rewarding parts of the job has always been having that conversation with developers and research funders, and working to get our research results actually used in practice.
Few venues are as important to helping that conversation happen—helping research results usefully reach the folks in the trenches who should be applying them—as IEEE Software, so I'm happy to be starting my new role as editor in chief.
For the last four years, as department editor of the Voice of Evidence (VOE) column, it's been my pleasure to collaborate with a variety of other researchers working not only around the world but also in many different subfields of software engineering. In each column, our mission was to summarize the research results across a particular research area and help abstract the key takeaway points that developers should be aware of from that research. Although I leave the VOE behind now, I'm excited that it will continue under a new set of coeditors, both veteran contributors to the VOE in past issues, who I know will take it in new and interesting directions: Helen Sharp, professor of software engineering at the Open University, London, and Tore Dybå, chief scientist at SINTEF.
In my new role as EIC, I'm looking forward to working with practitioners and making sure that we hear their valuable stories and experiences. My goal for the next few years is to continue to grow and improve the peer-reviewed content that is such an important part of the magazine, but also to provide more channels for users to comment on it, interact with the authors, and share their own experiences as they try to apply those results. Just as I believe that research isn't complete if it doesn't reach stakeholders who can benefit from it, I believe it also needs to be informed by hearing the real-life challenges and experiences of practitioners. To these ends, we're working on a number of new endeavors.
First, I'd like to welcome two new editorial board members: Thomas Zimmermann, associate EIC for development infrastructures and tools, and Tore Dybå, who, in addition to coediting the Voice of Evidence department, will also serve as associate EIC for empirical studies (see the "A Warm Welcome" sidebar for more details).
Also, Linda Rising will continue to helm the "Insights" department. Linda shepherds experience reports to help readers communicate their own stories about what worked (or didn't) for them. We're looking forward to many more concrete, actionable, and enjoyable stories from the trenches. If you're interested in participating, see Linda's introduction in the May/June 2010 issue for pointers.
We'll also be experimenting with new venues that enable author–reader interactions in real time. Software has begun offering opportunities for readers to post comments on articles to give you, our readers, a way to communicate with the authors and each other—questions, observations, or similar experiences that can help put the content in context. Our latest feature, by Sallyann Freudenberg and Helen Sharp, is compiling a set of the top research questions that practitioners care about; see http://computingnow.computer.org/sw/ResearchQs. Also look for "chat" experimentation that will enable our authors and guest editors to answer questions and raise ideas in real time.
Of course, even in this day and age, not every conversation has to happen online. We've been engaging with a number of conferences to encourage and foster effective mechanisms for gathering industrial experience reports. (See, for example, some of the results in the "Collaboration with SATURN" sidebar.) We have a number of additional collaborations lined up, which we'll be reporting in future issues, and our department editors and staff would enjoy talking with you at these events in person.
Finally, having lived in Washington, DC, for several years now, I have a healthy appreciation for well-developed podcasts that keep me up to date on challenging and informative topics while I sit in commuter traffic. We already offer a podcast version of the latest departments from Grady Booch and Neil Maiden (go to www.computer.org/cn/software). We have ambitious ideas for offering podcasts of additional content as well as special features like debates on timely topics, and are lining up experiments with additional delivery media as well.
With all of the above, I hope not to provide more bells and whistles, but to find ways of usefully enhancing your access to the top-notch, peer-reviewed content that Software continues to provide. If you have ideas or feedback, I'd love to hear from you—email me at email@example.com. This is going to be fun.
This erratum corrects the article "Visual Tools for Software Architecture Understanding: A Stakeholder Perspective," Alexandru C. Telea, Lucian Voinea, and Hans Sassenburg, IEEE Software, vol. 27, no. 6, pp. 46–53; http://doi.ieeecomputersociety.org/10.1109/MS.2010.115. The Rigi URL in Figure 1 should be www.rigi.csc.uvic.ca, not www.csc.rigi.uvic.ca.