Abstract: The peer-review process is currently under stress due to the increasingly large number of submissions to top-tier venues, especially in Artificial Intelligence (AI) and Machine Learning (ML). Consequently, the quality of peer reviews is under question, and dissatisfaction among authors is not uncommon but rather prominent. In this work, we propose "ReviVal" (expanded as "REVIew eVALuation"), a system to automatically grade a peer-review report for its informativeness. We define review informativeness in terms of its Exhaustiveness and Strength, where Exhaustiveness signifies how exhaustively the review covers the different sections and qualitative aspects of the paper, and Strength signifies how sure the reviewer is of their evaluation. We train ReviVal, a multitask deep network for review informativeness prediction, on publicly available peer reviews, which we curate from the OpenReview platform. We annotate the review sentences with labels for (a) which sections and (b) which quality aspects of the paper they refer to. We automatically annotate our data with the reviewer's sentiment intensity to capture the reviewer's conviction. Our approach significantly outperforms several intuitive baselines for this novel task. To the best of our knowledge, our work is the first of its kind to automatically estimate the informativeness of a peer-review report.