Issue No. 2, February 2007 (vol. 29), pp. 193-194. Published by the IEEE Computer Society.
Is it really about the numbers? As I write this annual editorial, my 24" monitor is filled with spreadsheets, charts, and graphs showing various metrics on the state of the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). Looking at the numbers helps to answer three questions: How attractive is TPAMI to authors? How attractive are TPAMI papers to the readership? And how effective is the TPAMI editorial and review process? The statistics presented in this editorial provide insights into, though never complete answers to, these questions.
By all accounts, the IEEE Transactions on Pattern Analysis and Machine Intelligence is doing very well. Consider, for example, the submission rate. Since 2002, the number of submissions to TPAMI has doubled, with more than 900 projected for 2006,¹ a 22 percent annual growth rate (see Fig. 1). At this rate, when the next Editor-in-Chief (EIC) is two years into his/her term, TPAMI could have more than 2,000 submissions annually! This increase is attributable to the growth of the fields within TPAMI's scope—computer vision, pattern recognition, machine learning, biometrics, etc.—and it mirrors the growth of our premier conferences, such as CVPR, ICCV, ECCV, ICPR, and NIPS.
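The doubling since 2002 and the 2,000-submission projection are consistent with compound growth at roughly 22 percent per year. A small sketch (the 914-paper base comes from the footnote; treating 2010 as "two years into the next EIC's term" is an assumption):

```python
# Compound-growth sketch of the submission projection.
# Assumptions: ~914 submissions in 2006 (per the footnote), a constant
# 22 percent annual growth rate, and 2010 as the projection target year.
base_2006 = 914
growth = 1.22

def projected_submissions(year):
    """Project annual submissions under constant 22% growth from 2006."""
    return base_2006 * growth ** (year - 2006)

# Four years of 22% growth roughly doubles the count: 1.22**4 ≈ 2.22.
print(round(growth ** 4, 2))               # 2.22
print(round(projected_submissions(2010)))  # 2025, i.e. over 2,000
```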
Fig. 1. Number of papers submitted annually to TPAMI.
Last year, this editorial centered on the implications of TPAMI's impact factor. Each year, Thomson ISI produces the Journal Citation Reports, which include a number of metrics on scholarly publications. The most widely cited metric is the impact factor: the average number of times papers published in the two previous years were referenced during the given year by journals tracked by ISI. Based on TPAMI papers published in 2003 and 2004 and referenced in 2005, TPAMI's impact factor was 3.81. Papers in TPAMI were cited 13,053 times in 2005, and the cited half-life was 8.7 years (the number of years, counting back from the current year, that account for 50 percent of the citations the journal received during the current year). This impact factor makes TPAMI the second most cited journal in artificial intelligence, the third-ranked IEEE publication, and the fourth-ranked publication in electrical engineering. It represents a decrease from last year's impact factor of 4.32, when TPAMI was the most highly ranked journal in electrical engineering. As the saying goes, "When you're number one, there's only one way to go."
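Both ISI metrics reduce to simple computations over citation counts. A sketch of the two definitions; the citation figures below are hypothetical illustrations, not ISI's actual data:

```python
def impact_factor(cites_to_prev_two_years, papers_prev_two_years):
    """Impact factor for year Y: citations in Y to papers published in
    Y-1 and Y-2, divided by the number of papers published in Y-1 and Y-2."""
    return cites_to_prev_two_years / papers_prev_two_years

def cited_half_life(cites_by_age):
    """Years back from the current year that account for 50 percent of
    the citations the journal received this year.
    cites_by_age[0] = citations to current-year papers, and so on."""
    total = sum(cites_by_age)
    running = 0
    for age, cites in enumerate(cites_by_age):
        running += cites
        if running >= total / 2:
            return age + 1
    return len(cites_by_age)

# Hypothetical: 400 papers published in 2003-2004, cited 1,524 times in
# 2005, would yield TPAMI's reported impact factor of 3.81.
print(impact_factor(1524, 400))  # 3.81
```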
But TPAMI is not about the numbers—these are a consequence, not a cause. The core values driving those of us who deeply care about TPAMI compel us to create the premier journal in computer vision, pattern recognition, and machine learning. That is, a journal filled with important papers that readers find interesting, valuable, and timely; a journal to which authors choose to send their papers, not solely because the impact factor and readership are high, but because they will receive prompt, fair, and informative reviews. The vitality of TPAMI arises from both the staff at the IEEE Computer Society and the extensive and intertwined community of readers, authors, editors, and reviewers.
The Associate Editors (AEs) and reviewers are all volunteers who contribute enormous effort and expertise to help select and improve the very best submissions. By a back-of-the-envelope calculation, nearly 21,600 hours (more than ten person-years) were donated to TPAMI to review papers alone (900 papers * 3 reviews on average * 8 hours of effort per review). On top of this, about 55 Associate Editors oversee the review process and make very difficult decisions from potentially contradictory reviews. The AEs have been selected based on their research achievements and sound judgment, as evidenced through their experience as reviewers, on program committees, or as editors of publications with standards similar to TPAMI's.
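The back-of-the-envelope reviewing effort works out as follows (the per-paper figures come from the text; the 2,000-hour working year is an assumed convention):

```python
# Back-of-the-envelope estimate of donated reviewing effort.
# Figures from the editorial; the working-year length is an assumption.
papers = 900
reviews_per_paper = 3
hours_per_review = 8
hours_per_work_year = 2000  # assumed: 50 weeks * 40 hours

total_hours = papers * reviews_per_paper * hours_per_review
person_years = total_hours / hours_per_work_year
print(total_hours)   # 21600
print(person_years)  # 10.8
```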
One of the major goals of all TPAMI EICs has been to reduce the time from submission to publication. I am pleased to report that the average time from submission to a first decision is three months, and the average time from submission to a final decision is about seven months. Note that, in 2001, the first decision took nine months on average. This reduction is, in large part, due to all the messages that editors and reviewers receive from Elaine Stephenson of the Computer Society. Credit also goes to Suzanne Werner, who manages the peer review process for the Transactions. The time to a first review in TPAMI now rivals the review time of our best conferences, and authors don't have to wait for a conference deadline. Once a paper is accepted, Julie Hicks, our production editor, edits the papers and produces each issue.
It should be noted that, while the number of submissions has increased so dramatically since 2002, TPAMI still receives the support of only one transactions assistant and one production editor. We really don't know how Elaine and Julie keep up with the load. Some help is coming in the form of technology. In January, a new version of Manuscript Central will be rolled out, and this is expected to clear up many of the quirks and inefficiencies of the current system.
One question that authors regularly ask is, "What is the acceptance rate at TPAMI, and how can I get my paper accepted?" The first part is easier to answer. The acceptance rate for papers submitted in 2004 was just under 25 percent. (Note that, unlike conferences, where all decisions are made concurrently, acceptances for journals are rolling, and we do not yet know the outcome for all papers submitted in 2005.) With regard to getting a paper into TPAMI, the starting points are that the paper should be within TPAMI's scope, it should contain a significant research contribution, and that contribution should be well articulated. While all accepted papers receive at least two independent reviews, roughly 15 percent of submitted papers are rejected by the EIC, AEIC, or an Associate Editor for being outside TPAMI's scope. Authors should look through back issues of TPAMI to see how well their manuscript relates to those published in TPAMI, or whether their paper might be a better fit with another IEEE Transactions. The border between TPAMI's scope and that of related journals is not sharp, and it is not always clear whether a paper fits better into TPAMI or the IEEE Transactions on Image Processing, the IEEE Transactions on Neural Networks, the IEEE Transactions on Fuzzy Systems, etc. A smaller fraction of submitted papers are rejected without review by an Associate Editor based on the paper's quality. Some of these might be characterized as undergraduate projects, others are so poorly written that reviewers would not be able to evaluate the contribution, and some simply describe a straightforward application of textbook ideas. Our Associate Editors have a great deal of experience and are in a position to judge that such papers will almost certainly be rejected once reviewed. This is important because the pool of reviewers is bounded, and we recognize that reviewer time is precious.
While authors may feel slighted not to receive two or three reviews, the immediate feedback of an administrative rejection allows them to improve their paper and ultimately gives them a greater chance of a more timely acceptance in another journal.
Now, how does one take a really good idea and write a paper that is likely to be accepted? While there's no single answer, consider a TPAMI Associate Editor's cogent response to the author of a recent paper:
To be accepted, a paper must present a clear, self-contained, scientifically convincing case for both the validity of its results and their usefulness. It therefore has to "work" at several levels. In particular, in applied fields like computer vision and especially for incremental improvements to established techniques such as those given here, theoretical results or algorithms seldom suffice: to demonstrate that the contribution will have an impact on actual practice, comparative experiments showing a clear practical advance over existing methods and establishing the limits of validity of the new approach are typically needed. Testing is essential because nice theoretical properties are not always a good guide to practical performance. It is not enough to provide a mere record of work done: you need to convince busy, skeptical practitioners who may already have working implementations to invest valuable time in understanding and perhaps reimplementing your method. Papers are published primarily to save other people effort so the onus of establishing the qualities of your method is on you, not on your readers.
While these comments relate to a particular type of computer vision paper, similar points apply to other areas in TPAMI's scope and to papers that might be characterized as purely theoretical, as introducing a new technique (algorithm), or as presenting a solution to a specific application.
This editorial presented a large body of statistics concerning the quality of TPAMI, the efficiency and effectiveness of the review process, and the popularity of TPAMI with authors. The greatest challenge facing TPAMI is to maintain focus and quality in spite of the rising submission rate. While we have increased the number of Associate Editors from 47 last year (January 2006) to 59 this year (January 2007), we continually need to recruit a diverse set of experienced AEs, and our research communities need to cultivate and develop more reviewers. Nevertheless, if the submission rate continues to grow at this rate for another five years, nonincremental solutions may be needed. We welcome your suggestions about how to make TPAMI a better (though not necessarily bigger) journal!
David J. Kriegman, Editor-in-Chief
David Fleet, Associate Editor-in-Chief

For information on obtaining reprints of this article, please send e-mail to: tpami@computer.org.

1. As of 7 November 2006, 791 papers have been submitted and the projected number of submissions is 914.
