Saturday, October 13, 2012

Initial Rejection Leads to More Citations Later

You've been waiting anxiously for that email from the journal with news about your latest submission.  It finally arrives in your inbox and you open the message:  "We regret to inform you....blah, blah, blah."

Is your paper doomed to oblivion?  Not by a long shot, according to a recent study that followed the submission histories of 80,748 manuscripts submitted to 923 bioscience journals.  Vincent Calcagno and colleagues obtained information from authors to allow tracking of their papers and used the data to construct a network of manuscript trajectories and eventual citations.  Although some of their findings were unsurprising (resubmission trajectories often involved going from higher- to lower-impact journals), there was one interesting and unexpected outcome:  Those papers that were initially rejected ultimately received significantly more citations on average when they were finally published than those accepted upon first submission.

The main reason for this finding, according to the authors, is the input from editors and reviewers on the initially rejected manuscript.  The presumably more critical reviews (likely, given that the paper was rejected) led to improvements that ultimately resulted in more citations when the paper was published elsewhere.  And this result held regardless of whether the paper was eventually published in a lower- or higher-impact journal than the original.

Although this seems like good news for you and your just-rejected paper, the reality is that you will still need to spend time and effort to revise your manuscript, resubmit it somewhere, and then deal with a new set of reviews and editorial comments.  In some cases, your manuscript might get rejected several times before it finds a home.  During that time, which can be months or even years, your work is not being read or cited.

The paper by Calcagno et al. is one of many that try to explain citation metrics.  There have been other studies of factors influencing citations, such as the number of references a paper cites (more references = more citations), the length of the paper's title (longer title = more citations), and open-access publication (open access = more citations).  Such factors have been touted as ways to boost one's own citations and h-index.  However, these anticipated outcomes may not pan out, because some are based on spurious bivariate relationships, not necessarily cause and effect.  I've also seen advice about self-citations:  cite one or more of your own papers in each new work to boost your overall citation count.  However, most bibliometric tools allow self-citations to be excluded when assessing someone's citation rate.  So padding one's paper with extra references, self-citations, or similar tactics likely won't work.
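As a side note on how that exclusion works:  the h-index is simply the largest number h such that h of your papers have at least h citations each.  The short sketch below is purely illustrative (the citation counts are invented, and it is not taken from any of the studies cited here); it shows how stripping out self-citations can pull the index back down, which is one reason padding is unlikely to help.

# Illustrative sketch only: h-index from hypothetical per-paper citation counts,
# computed with and without self-citations. All numbers are made up.

def h_index(citation_counts):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical papers: (total citations, of which self-citations)
papers = [(25, 3), (14, 2), (9, 4), (6, 1), (5, 5), (2, 2)]

all_cites = [total for total, _ in papers]
external_cites = [total - self_cites for total, self_cites in papers]

print("h-index, self-citations included:", h_index(all_cites))       # 5
print("h-index, self-citations excluded:", h_index(external_cites))  # 4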

Similarly, even though the statistics described above suggest that initially rejected papers eventually get more citations on average than those accepted upon first submission, it's still a better individual strategy to avoid rejection in the first place by submitting the best paper you can (which means revising many times before submission) to the most appropriate journal for your topic.  And if your paper gets rejected?  Don't give up; pay close attention to the reviewers' criticisms when making your revisions.

References:

Calcagno, V. et al. 2012. Flows of research manuscripts among scientific journals reveal hidden submission patterns. Science (published online). doi:10.1126/science.1227833

Corbyn, Z. 2010. An easy way to boost a paper's citations. Nature News. doi:10.1038/news.2010.406

Jacques, T.S. and Sebire, N.J. 2010. The impact of article titles on citation hits: an analysis of general and specialist medical journals. JRSM Short Reports 1(1): 2. doi:10.1258/shorts.2009.100020

MacCallum, C.J. and Parthasarathy, H. 2006. Open access increases citation rate. PLoS Biology 4(5): e176. doi:10.1371/journal.pbio.0040176

2 comments:

Zen Faulkes said...

A note on the "resubmitted papers cited more" effect: the effect is real, but tiny. See the picture here, and the huge amount of overlap between the two groups:

https://twitter.com/Graham_Coop/status/256781015136206849

They were only able to pull this out because they had a sample size in the tens of thousands. There's almost no predictive power about what will happen to the citations of any one paper that is rejected compared to one that isn't.

DrDoyenne said...

Thanks for pointing that out. There are similar, and even greater, issues with some of the other bibliometric studies I mentioned, which is why I said that what appears significant across a population of many observations is unlikely to pan out as an individual strategy.