A Letter from the Editor: Avoiding Rejection
SEPTEMBER/OCTOBER 2005 (Vol. 20, No. 5) pp. 2-4
1541-1672/05/$31.00 © 2005 IEEE

Published by the IEEE Computer Society
Intelligent Readers,
All too often these days I find myself writing letters to authors with the bad news that
The above-referenced manuscript, which you submitted to IEEE Intelligent Systems, has completed the review process. After carefully examining the manuscript and reviews, the editor in chief has decided that the manuscript isn't suitable for publication in IEEE Intelligent Systems, and therefore we must reject it.
I wish I never had to send such a letter, but currently we're able to accept fewer than one in 10 of the papers submitted for regular publication (special-issue acceptance rates vary depending on topic). This means not only am I rejecting weak submissions, but all too often I must reject papers that contain some strong material yet fall just short of the standard we need to maintain. Especially in these latter cases, I try hard to explain to the author what it would take to meet our standards.
I realized recently that I was repeating the same advice to multiple authors, so I thought that putting it in this column might be of use to those writing for this magazine (and other technical publications). This advice might also be useful to share with your graduate students or the junior colleagues you mentor—I learned it through a lot of reviews of a lot of rejected papers, and I sure wish someone had shared more of this with me earlier in my career!
Cite the right literature
First and foremost, many papers simply don't do a good job of situating the work in the greater research milieu. This can be something as egregious as having no references to other work at all or as subtle as missing a key reference. In the former case, authors have replied to my rejections complaining that, as one author put it, "comparison with existing literature also wouldn't help much because the approach of this model is very different from those discussed in the literature."
Even if this is true of a piece of novel work, the author still has the responsibility to help the reader understand why. As I responded to this author (text changed slightly to provide anonymity),
How would our readers who aren't experts in the field know this? For example, suppose one of them has heard a talk by <a researcher> about his <related> model and wants to know what's different between that work and yours. You know it's different, but on the surface, there's much that a reader could confuse. … So, you can help readers understand how your work compares and convince them that you're aware of the state of the art, so that they know you're not just reinventing something out there (it's your responsibility as author, not theirs as reader, to place the work in this context).
The more common case, however, especially in these days when AI has splintered into subareas with separate publications, is that the author is simply unaware of work being done elsewhere in the field, often using a different term for the algorithm or approach. "How," the poor author might ask, "am I possibly to know all the work going on in all these other parts of the field?" And that's a fair question—generally, there's no way that someone can track all the work in the field these days.
However, authors do have the responsibility to find literature relevant to their work. In particular, if another area is likely to contain appropriate work, then the author must make the effort to explore that literature. An author claiming his approach is "user centric" can't ignore the cognitive literature. An author who claims that a biological model of the brain inspired her work isn't free to ignore the literature on neural and neural network modeling. It's okay to miss some slightly related work in an obscure corner of the field, but it's unforgivable (and grounds for rejection) to reinvent the wheel just because you didn't find a relevant hit for "round rolling rubber" in Google Scholar.
I'm sure some of you are now confused. How can I say that our main problem is with citations, when our author instructions (see www.computer.org/intelligent/author.htm) ask you to limit your references to about 10 per paper or sidebar? The answer is that the key is to cite the right work, not to take a shotgun approach. One common mistake, for example, is citing too many of your own papers: one or two self-citations can be appropriate, but more than that and you're probably overdoing it. In addition, if several of your citations are to literature in a different area, consider a sidebar; a couple of paragraphs about that other work, with a few citations, can become a useful addition to your paper. Again, the goal is to ensure the citations put your work in the appropriate context, not to mention every possible source.
Do the right evaluation
A close contender for the most common cause of rejection, narrowly trailing literature-review flaws, is lack of evaluation. There are many different ways to evaluate a piece of research and no one-size-fits-all way to ensure the work is sound. Some cases require theoretical or mathematical analysis, others require an experimental result or a user study, and sometimes all that's needed is a strong demonstration. Deciding which approach to use, which is also the key to getting your paper accepted, comes down to a simple principle: justify your claims.
In deciding how to evaluate your research for the paper, make sure the paper demonstrates the claims you've asserted for that research. An empirical graph showing how fast your system runs, or a proof that the mathematics is correct, is sometimes unnecessary and often insufficient; it all depends on what you claim your new approach can achieve. If you claim your approach does something new, then all you need is a good, strong demonstration that your approach can do it. If you claim your approach is superior to previous approaches, then your evaluation must establish that superiority.
Designing an appropriate evaluation is part of the art of good science and isn't always easy—but it's always needed. The heuristic of tailoring your evaluation to your claims (or tailoring your claim to your evaluation design), however, is usually a good one. Let's take a somewhat artificial case—suppose an author claimed a major breakthrough in knowledge representation. How in the world might you evaluate such a thing?
While a true validation of this claim would be arbitrarily hard, a good model for this author might be to find some corpus of sentences or other statements about the world (and many are available) and work out how the model represents them. If the author could state in the paper that
To prove our contention, we randomly chose 100 sentences from the Such-and-Such corpus and analyzed them according to the model. In 96 cases the mapping was trivial (see www.xxx.edu for details). The remaining four cases needed a more complex use of the model. For example, sentence 87 read "'Twas brillig, and the slithy toves / Did gyre and gimble in the wabe," which clearly isn't a standard knowledge statement. However, elsewhere in the book Through the Looking Glass, these terms are interpreted as follows: [examples]. Making these substitutions, we get the new sentence "It was four o'clock in the afternoon, and the lithe and slimy badger/lizard/corkscrew-like animals were rotating and making holes in the grass plot around a sundial," which we can easily put into this model.
Such evidence would give the reader much more confidence that the model could do what the author claims. Even if the representation couldn't handle every case, it would still clearly have a lot of coverage. Of course, a more complete evaluation of the corpus coverage, a computational mechanism for mapping from the sentences to the representation, or a comparison to other representational models applied to the same statements would be even stronger evidence. However, the key is that the evaluation fits the claim, and that's what is needed.
So, if you want to claim that your approach is the first to do something, then you need to show it can indeed do that thing (and remember to do a thorough literature search). If you want to claim your approach solves some industrial problem better than previous approaches, then you need to do a comparison in that problem space. If you claim to have a good user interface, then you need either a user study or a really solid justification from the cognitive literature. If you claim completeness, soundness, efficiency, or some other mathematical property, then a proof is a necessity.
An easy way to tell if you've accomplished this is to ensure you describe both your main claim and its validation in the paper's abstract. If your abstract says something like
We have a new model of planning that outperforms all others. We validate this by producing graphs of its performance on a few random problems and don't compare it with anything else.
then, I suspect, you have a problem. If, on the other hand, the second sentence read "We validate this with a proof that it's in a lower complexity class than any previously known algorithm," then this paper is on its way to a good review.
Additional advice
Of course, we reject papers for many other reasons. Sometimes the work is quite strong but too technical for a general publication such as this magazine. Other times the work falls squarely within the scope of another IEEE publication and is only minimally related to AI, in which case we're probably not the right place to submit that paper. Sometimes the nature of your result makes the paper more appropriate for a specialized journal whose readers can appreciate how difficult it was to achieve the modest performance gain you've worked so hard for. Some time spent up front making sure we're the right venue is well worth it and might spare you that rejection letter.
Conclusion
Although this letter has focused on the negative, the bright side is that IEEE Intelligent Systems is a high-impact, exciting place to publish, as you can see by the articles in this and every issue. We work hard to treat every submission fairly, and we often end up working with authors to help them get their exciting results above our publication threshold. We know how hard you work on your papers, and we put great effort into seeing that the best of them end up in our magazine. I hope the guidelines in this letter will help you get the great work you're doing into a form we can publish—it's what we're here for!
Yours,