When Reviewers Lock Horns: Finding Disagreements in Scientific Peer Reviews

Published: 07 Oct 2023, Last Modified: 01 Dec 2023. EMNLP 2023 Main.
Submission Type: Regular Short Paper
Submission Track: Information Extraction
Submission Track 2: NLP Applications
Keywords: Contradiction Detection, Peer Reviews, NLP
TL;DR: We introduce "ContraSciView", a dataset for identifying contradictions in peer reviews, and propose a baseline model for detecting these contradictions, marking the first automated effort to spot disagreements among reviewers.
Abstract: To date, the efficacy of the scientific publishing enterprise fundamentally rests on the strength of the peer review process. The journal editor or conference chair primarily relies on the expert reviewers' assessments, $\textit{identifies points of agreement and disagreement}$, and tries to reach a consensus to make a fair and informed decision on whether to accept or reject a paper. However, with the escalating number of submissions requiring review, especially at top-tier Artificial Intelligence (AI) conferences, the editor/chair, among many other duties, invests significant, sometimes stressful, effort to mitigate reviewer disagreements. In this work, we introduce a novel task of automatically identifying contradictions among reviewers of a given article. To this end, we introduce $\textit{ContraSciView}$, a comprehensive review-pair contradiction dataset covering around 8.5k papers (with around 28k review pairs containing nearly 50k review-pair comments) from the open-review-based ICLR and NeurIPS conferences. We further propose a baseline model that detects contradictory statements in the review pairs. To the best of our knowledge, ours is the first attempt to automatically identify disagreements among peer reviewers. We make our dataset and code public for further investigation.
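To illustrate what review-pair contradiction detection involves, the sketch below scores a hypothetical pair of reviewer comments with an off-the-shelf MNLI model (roberta-large-mnli). This is an assumed, illustrative setup only, not the baseline released with ContraSciView; the example comments are invented.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical sketch, not the paper's baseline: treat review-pair contradiction
# detection as an NLI problem using an off-the-shelf MNLI model.
model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Two invented reviewer comments about the same paper.
review_a = "The method is clearly novel and the experiments are convincing."
review_b = "The contribution is incremental and the evaluation is unconvincing."

# Encode the pair of comments as premise/hypothesis and score the three NLI classes.
inputs = tokenizer(review_a, review_b, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1).squeeze()

# roberta-large-mnli label order: contradiction, neutral, entailment.
for label, p in zip(["contradiction", "neutral", "entailment"], probs.tolist()):
    print(f"{label}: {p:.3f}")
```

A high contradiction probability for a comment pair would flag it as a candidate reviewer disagreement; the dataset's annotated review-pair comments would be used to train or evaluate such a classifier.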
Submission Number: 5742