Issue No.04 - July/August (2006 vol.23)
Published by the IEEE Computer Society
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MDT.2006.109
The need for a more accurate way to count citations.
I don't think anything in our industry would get done without deadlines. We have that block of code to write by next month, a presentation to complete for a staff meeting next Friday, and that paper to write by the ITC due date. "Schedules are for slipping," said a manager in Tracy Kidder's The Soul of a New Machine, but a slip just means a new deadline. I had a manager who required a date for each action item and promise. If you pushed back hard enough, you could get him to agree to a deadline to provide the deadline. It was annoying but effective—to paraphrase Dr. Johnson, a deadline concentrates the mind wonderfully.
Deadlines exist at home also, and they are just as effective. I own far too many books, many of which I haven't read, and I'm not making much progress on them. But when I get books from the library, with the due dates staring me in the face, somehow the library books get read. Sometimes I feel like I should borrow some of the books I own from the library so I'll get around to them.
The same goes for DVDs rented from Netflix. While there is no due date per se, my queue length gets my attention. So, I watch the DVDs that come in the mail before I get to the ones I own.
One thing about Netflix: their process lets them figure out which DVDs in their stock get watched and which don't.
This process got me thinking about technical papers. We read papers under deadline also. While I'll read papers that are directly relevant to my work right away, I add most to a big pile. The exceptions are papers I need to review—these get immediate attention.
Academics live by publishing. The assumption is that the act of publishing implies the act of reading. I'm not sure that's true. I have heard some journals referred to as "write only." They have small circulations, and, except for a small number of researchers working in a tiny area, most of the papers don't get read. These journals are often the ones with the highest prestige.
Perhaps citation counts would be a better indicator. But I'm not sure referencing a paper really means that it has been read. I once published a survey paper that was referenced a very satisfying number of times. I'm convinced, though, that most of those researchers referencing the paper did so to avoid including references to the many papers I referenced. Maybe they read the relevant sections of my paper, but not the whole thing.
If we really want an accurate count of what papers get read, we need an IEEE version of Netflix. Publish the abstracts, make them available online, and require users to set up a queue of the papers they want. People wouldn't have to return them, but they also wouldn't get the next one until they had at least looked at the paper at the top of their queue. Authors could get quarterly reports on how many people checked out their papers, as well as the time between checking a paper out and checking out the next one. Thirty seconds might mean that someone was gaming the system by checking out lots of papers at once; five minutes might mean a quick glance found it not very useful. Maybe we could even convince universities and conferences to buy ads to encourage looking at their papers, and thus make the IEEE some money.
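The checkout rule described above is simple enough to sketch in code. The following is a minimal, entirely hypothetical model (all class and method names are my invention, not any real IEEE or Netflix API): a reader cannot check out the next paper until the one at the head of the queue has at least been opened, and the system records checkout counts and the dwell time between checking out and opening, for the kind of quarterly report imagined above.

```python
from collections import defaultdict
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PaperQueue:
    """Hypothetical sketch of the 'NetPapers' rule: no next paper
    until the current one has at least been opened."""
    queue: list = field(default_factory=list)          # paper IDs, in order
    current: Optional[str] = None                      # paper checked out now
    checkouts: dict = field(default_factory=lambda: defaultdict(int))
    dwell_times: dict = field(default_factory=lambda: defaultdict(list))
    _checkout_time: float = 0.0

    def check_out(self, now: float) -> str:
        # Enforce the rule: open the paper at the head of the queue first.
        if self.current is not None:
            raise RuntimeError("open your current paper before the next one")
        paper = self.queue.pop(0)
        self.current = paper
        self._checkout_time = now
        self.checkouts[paper] += 1
        return paper

    def open_paper(self, now: float) -> None:
        # Record the gap between checkout and opening: ~30 s might
        # suggest gaming the system, ~5 min a quick dismissal.
        self.dwell_times[self.current].append(now - self._checkout_time)
        self.current = None
```

For example, a reader with two queued papers must open the first before the second becomes available, and the author of each paper can later see how many checkouts it drew and how quickly readers moved on.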
NetPapers might be just the thing we need to figure out who is really reading this stuff.