if(rand() > 0.5) reject()

Peer review gets discussed a lot, and one recurring complaint is how much the outcome depends on the specific set of reviewers assigned to your paper. Since this is entirely outside an author’s control, the only thing they can do is cross their fingers and hope for sympathetic reviewers. Yet surely, if you write an outstanding paper, it should get accepted regardless of who reviews it, shouldn’t it?

This was the question that NIPS (Neural Information Processing Systems — one of the big, and most oddly named, machine learning conferences) tried to answer this year by setting up a randomized trial. 10% of the papers submitted to the conference were each reviewed by two independent sets of reviewers. Luckily for the future of peer reviewing, there was some consistency; however, the agreement between the decisions of the two reviewing panels was not as great as some people expected.
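To get a feel for what that agreement figure means, a toy simulation helps. The sketch below is not the NIPS process, just a minimal model under made-up assumptions: each paper has a latent quality, each committee sees that quality plus independent reviewer noise and accepts a fixed fraction, and we count how often two independent committees reach the same decision. The paper count, acceptance rate, and noise levels are illustrative parameters, nothing more.

```python
import random

def simulate_two_committees(n_papers=1000, accept_rate=0.25, noise=0.5, seed=0):
    """Toy model (not the real NIPS process): each paper has a latent
    quality score; each committee observes quality plus independent
    reviewer noise and accepts the top accept_rate fraction."""
    rng = random.Random(seed)
    quality = [rng.gauss(0, 1) for _ in range(n_papers)]
    n_accept = int(accept_rate * n_papers)

    def committee():
        # Rank papers by noisy observed quality and accept the top slice.
        ranked = sorted(range(n_papers),
                        key=lambda i: quality[i] + rng.gauss(0, noise),
                        reverse=True)
        return set(ranked[:n_accept])

    a, b = committee(), committee()
    # Fraction of papers on which the two committees made the same decision.
    return sum((i in a) == (i in b) for i in range(n_papers)) / n_papers

# Two extremes: careful, low-noise reviewing vs. the coin flip in the title.
print("low noise :", simulate_two_committees(noise=0.1))   # committees mostly agree
print("high noise:", simulate_two_committees(noise=5.0))   # near chance agreement (~0.62 at this acceptance rate)
```

The real result sits somewhere between those two extremes, and exactly where is what the experiment was designed to measure.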

This is hardly the death knell for peer reviewing, but it does raise some interesting questions. If the variance in assessments between reviewer sets is greater than we might think, can we correct for it? There is more data yet to be released from this experiment, and one interesting stat to look at would be whether there was more within-reviewer-set variance on the submissions where the two reviewer sets disagreed. If so, then perhaps an improved process would require unanimity among the reviewers, with a second round of reviewing for papers where there is disagreement.
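That within-versus-between comparison is easy to express as a computation once per-reviewer scores are available. The sketch below runs it on hypothetical data generated in roughly the shape the released data might take (two independent sets of a few reviewer scores per paper); the 1–10 score scale, the acceptance threshold, and the data generator are all assumptions made purely so the example runs end to end.

```python
import random
import statistics

def hypothetical_review_data(n_papers=200, reviewers_per_set=3, noise=1.5, seed=1):
    """Made-up data in the rough shape the released data might take:
    two independent sets of reviewer scores (on a 1-10 scale) per paper."""
    rng = random.Random(seed)

    def score_set(quality):
        return [min(10.0, max(1.0, quality + rng.gauss(0, noise)))
                for _ in range(reviewers_per_set)]

    papers = []
    for _ in range(n_papers):
        quality = rng.uniform(3, 8)          # latent paper quality
        papers.append((score_set(quality), score_set(quality)))
    return papers

def accepted(scores, threshold=6.0):
    return statistics.mean(scores) >= threshold

# The stat in question: is the spread of scores *within* a reviewer set
# larger on papers where the two sets reached different decisions?
agree_spread, disagree_spread = [], []
for set_a, set_b in hypothetical_review_data():
    spread = (statistics.variance(set_a) + statistics.variance(set_b)) / 2
    bucket = agree_spread if accepted(set_a) == accepted(set_b) else disagree_spread
    bucket.append(spread)

for label, bucket in [("sets agree", agree_spread), ("sets disagree", disagree_spread)]:
    if bucket:  # guard against an empty bucket in a small sample
        print(f"mean within-set variance, {label}: {statistics.mean(bucket):.2f}")
```

On the real scores the comparison is the same simple aggregation; whether the gap actually shows up is the empirical question the released data should settle.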

For more information on the experiment and the NIPS reviewing process, Neil Lawrence has a blog post giving the background.
