Learning from Conferences
By David Alan Grier

There is something in a conference that defines our field. I have younger colleagues who claim that conferences are far more valuable to their careers than publishing a paper in a journal. I am sympathetic to their claims, even though I spent years resisting them when I served as a dean.

I know that, at conferences, you get to meet your peers and trade all the ideas that aren’t easily conveyed in a paper. I also recognize that computing is a fast-moving field and that traditional journals can lag two or three or even four years behind the field. Nonetheless, something in me values the kind of discipline needed to produce a lengthy, detailed article. I often wonder if, in 100 years, people will review the record of our conferences and marvel at how such chaos could produce such powerful technology.

The fundamental unit of the conference is not the paper but the track. By themselves, individual papers tend to deal with narrower and narrower problems. This is the nature of maturing technical fields. In new fields, we identify big problems and tend to do research that influences many people. In more established fields, we tend to take smaller steps and communicate our work’s results to smaller and smaller communities.

Occasionally, this maturation process leads people to believe that a field’s founders were intellectual giants and that subsequent researchers are not as skilled or intelligent. A close examination of conference papers shows that such a claim is not true. The first researchers in a field tend to be influential—often because they defined concepts that have shaped the field. If you look at the conferences of the 1950s and 1960s, you can see examples of pioneering researchers who didn’t always understand ideas that are now commonplace or possess the mathematical tools that their research required. By the same argument, many current researchers have deep insights into the field and possess mathematical or research skills that are far greater than those of many of the pioneers.

A conference from the late 1960s shows that some of the early workers didn’t always grasp fundamental concepts. At one session devoted to the still-new field of software engineering, one researcher gave a very clear discussion of requirements specification—the process by which an engineer defines what a new software system should do. During the course of the presentation, he got into an argument with a well-known engineer who took exception to the talk. The well-known engineer (and I think it best that I don’t give his name) said that it would be possible to specify a program that violated the laws of arithmetic and hence to require a program that produced incorrect results.

The speaker agreed that this was possible, but he also said that if a customer needed such a program, then the resulting system would be correct for that customer and for that application. At this point, the well-known engineer left the room, claiming that the session was wrong or stupid or something like that.

Although the experience of the well-known but unnamed engineer reminds us that no researcher is infallible, it doesn’t really help us understand why conference papers are dealing with smaller and smaller questions. In fact, there are several reasons for this phenomenon. First, problems in mature fields tend to be harder and require more investment. Researchers need to invest more time, effort, and money into building an experimental or analytical framework, learning the basic ideas, and starting a research program. In the process, they tend to learn their lessons in a step-by-step fashion.

However, conference papers are being shaped by a more fundamental force. As the entire technical community has matured, academic institutions have started measuring the quantity of scientific research by counting the number of papers. Laboratories and even individual researchers now have the goal of producing 8, 10, or even 20 papers per year. As a long-term referee, I’ve seen increasing numbers of papers that are nearly identical. The opening paragraphs of these papers describe a common experimental framework. They differ only because each has a single unique research hypothesis and a discussion of that hypothesis.

In times past, we might have combined a half dozen of these papers into a single, richer article. However, we do not live in the past but in the present. In this world, the conference track really becomes the fundamental unit of research.

In a conference track, we are supposed to see a lot of papers and a lot of different authors commenting on a single idea. In practice, these papers usually have a common theme but present variations on that common idea. Often, each paper in a single track has a unique perspective, or a unique mathematical notation, or even a unique set of basic concepts. These papers’ differences—more than their similarities—help us understand the state of research.

At a recent conference, I followed a session about Web semantics. Most of the contributors to the track were trying to gather information about how different websites used words to determine if the sites used those words consistently. Their work was based on information theory, a field that I had once studied, so I could follow the arguments.

None of the contributors used the same notation. None of them had the same definition for information or used a common measure for the distance between words, though they all used formulae that were algebraically equivalent or nearly so. None worked on a common problem.
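To make the point concrete, consider a standard textbook identity (my illustration, not a formula taken from that session): the mutual information between two variables can be written in several equivalent ways, and each way invites its own notation.

```latex
% Standard identities (textbook forms, not the presenters' formulas):
% the mutual information I(X;Y) can be written in several equivalent ways.
\begin{align*}
I(X;Y) &= H(X) - H(X \mid Y) \\
       &= H(X) + H(Y) - H(X,Y) \\
       &= \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}
        \;=\; D_{\mathrm{KL}}\bigl(p(x,y)\,\big\|\,p(x)\,p(y)\bigr).
\end{align*}
```

Two presenters can start from different lines of this chain, dress them in different symbols, and still be computing exactly the same quantity.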

In the end, I drew one lesson from the track. It seemed to me that all the presenters were struggling with the same problem and that none had a useful solution. The problem stems from the fact that information theory doesn’t really measure semantics. It doesn’t really come to grips with the meaning of words. The words “no” and “yes” have entirely opposite meanings, but they turn out to be very close when measured by the statistics of their Web usage.
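Here is a minimal sketch of why that happens, assuming a distributional measure of the rough kind those papers used; the context distributions for “yes” and “no” below are hypothetical numbers, made up purely for illustration. Because the two words tend to appear in nearly the same contexts on the Web, an information-theoretic distance such as the Jensen-Shannon divergence rates them as nearly identical.

```python
# A toy illustration (not code from any of the papers): antonyms that share
# contexts look "close" to an information-theoretic distance.
import math

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (in bits) between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical distributions over a handful of context words that might
# follow "yes" and "no" in Web text (illustrative numbers only).
contexts = ["i", "we", "it", "please", "thanks"]
p_yes = [0.30, 0.20, 0.25, 0.10, 0.15]
p_no  = [0.28, 0.22, 0.27, 0.08, 0.15]

print(jensen_shannon(p_yes, p_no))  # a small number: the antonyms look alike
```

For these made-up numbers the divergence comes out near zero, even though the two meanings could not be further apart. A measure of usage is not yet a measure of meaning, which is exactly the difficulty the presenters kept running into.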

As I finished reviewing the material on Web semantics, I hoped to find a paper that would give some direction to the field. These papers are extremely valuable. They give a clear set of definitions and ideas that lead to interesting questions. They are usually written by someone who has done a lot of thinking about a field and understands the limitations of the current approaches to research. From that reflection, the writers can identify a promising set of concepts that will define the research and can articulate the kinds of questions that those concepts will help answer.

Such papers are as rare as they are valuable. They require a certain amount of time to write. Sadly, conferences tend to emphasize speed. Still, a good conference can demonstrate how you might write such a paper should you have the chance to reflect on all the papers that come from a single track.



About David Alan Grier

David Alan Grier is a writer and scholar on computing technologies and was President of the IEEE Computer Society in 2013. He writes for Computer magazine. You can find videos of his writings at video.dagrier.net. He has served as editor in chief of IEEE Annals of the History of Computing, as chair of the Magazine Operations Committee and as an editorial board member of Computer. Grier formerly wrote the monthly column “The Known World.” He is an associate professor of science and technology policy at George Washington University in Washington, DC, with a particular interest in policy regarding digital technology and professional societies. He can be reached at grier@computer.org.