Position: The AI Conference Peer Review Crisis Demands Author Feedback and Reviewer Rewards

Published: 01 May 2025 · Last Modified: 23 Jul 2025 · ICML 2025 Position Paper Track (Oral) · CC BY 4.0
TL;DR: Transform the traditional one-way review system into a bi-directional feedback loop where authors evaluate review quality and reviewers earn formal accreditation
Abstract: The peer review process at major artificial intelligence (AI) conferences faces unprecedented challenges as paper submissions surge (now exceeding 10,000 per venue), accompanied by growing concerns over review quality and reviewer responsibility. This position paper argues for **the need to transform the traditional one-way review system into a bi-directional feedback loop where authors evaluate review quality and reviewers earn formal accreditation, creating an accountability framework that promotes a sustainable, high-quality peer review system.** The current review system can be viewed as an interaction among three parties: the authors, the reviewers, and the system (i.e., the conference), and we posit that all three share responsibility for the current problems. However, issues on the author side can only be addressed through policy enforcement and detection tools, and ethical concerns can only be corrected through self-reflection. This paper therefore focuses on reforming reviewer accountability with systematic rewards through two key mechanisms: (1) a two-stage bi-directional review system that allows authors to evaluate reviews while minimizing retaliatory behavior, and (2) a systematic reviewer reward system that incentivizes quality reviewing. We call for the community's strong engagement with these problems and the reforms needed to enhance the peer review process.
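To make the first mechanism concrete, here is a minimal Python sketch of one way a two-stage feedback loop could work; all names (`TwoStageFeedback`, `ReviewRating`, `submit_rating`, `release`) and the rating scales are illustrative assumptions, not the paper's specification. The key property is that author ratings stay sealed until final decisions are released, so neither side can retaliate.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ReviewRating:
    """An author's rating of a single review (hypothetical schema)."""
    review_id: str
    quality: int        # e.g., 1-5: depth and rigor of the review
    comprehension: int  # e.g., 1-5: how well the reviewer understood the paper

@dataclass
class TwoStageFeedback:
    sealed: list = field(default_factory=list)
    decisions_released: bool = False

    def submit_rating(self, rating: ReviewRating) -> None:
        # Stage 1: ratings are collected but kept sealed while decisions are
        # pending, so reviewers cannot see them and retaliate via scores.
        if self.decisions_released:
            raise RuntimeError("Rating window closed after decision release.")
        self.sealed.append(rating)

    def release(self) -> dict:
        # Stage 2: after final decisions, aggregate ratings per review and
        # forward them to the reviewer reward system.
        self.decisions_released = True
        by_review = {}
        for r in self.sealed:
            combined = (r.quality + r.comprehension) / 2
            by_review.setdefault(r.review_id, []).append(combined)
        return {rid: mean(scores) for rid, scores in by_review.items()}
```

Sealing ratings until the decision is final is the design choice doing the work here: authors can rate honestly without fear of score changes, and reviewers never see ratings while they can still act on them.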
Lay Summary: Most AI-related conferences (e.g., ICML, NeurIPS, ICLR, KDD) currently use the OpenReview system for peer review, in which 4-6 fellow researchers (i.e., reviewers) are assigned to each paper as judges. These reviewers provide written reviews and scores for the paper, and authors have the chance to correct misconceptions through rebuttals during the discussion phase. The meta-reviewer (the head of the reviewers) then makes a decision (acceptance or rejection) based on the reviews and scores. Over the past few years, however, complaints about peer review quality have increased, most likely due to the rising volume of submissions and advances in large language models (LLMs). There are concerning signs that some reviewers may be using LLMs to generate reviews, resulting in superficial or irresponsible feedback that frustrates authors. Unfortunately, the current review process gives authors no effective defense against such irresponsible reviews. This trend poses a serious threat to AI conferences, as it undermines the reputation and credibility of papers accepted at these prestigious venues. In this position paper, we argue that we should **(1) implement an author feedback system, where authors can rate reviewers based on review quality and paper comprehension, and (2) provide short-term and long-term incentives to reviewers through digital badges and reviewer impact scores.** We also propose several detailed implementation strategies to protect both authors and reviewers. We hope that this position paper will generate strong interest within the community regarding these problems and the reforms needed to enhance the peer review process.
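For the second proposal, the sketch below shows one hypothetical way per-review author ratings could accumulate into a long-term reviewer impact score with digital badges; the thresholds, weighting, and names (`ReviewerRecord`, `BADGE_THRESHOLDS`, `add_review`) are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass, field

# Assumed badge tiers: (minimum impact score, badge name).
BADGE_THRESHOLDS = [(50.0, "gold"), (20.0, "silver"), (5.0, "bronze")]

@dataclass
class ReviewerRecord:
    reviewer_id: str
    impact_score: float = 0.0
    badges: list = field(default_factory=list)

    def add_review(self, author_rating: float, max_rating: float = 5.0) -> None:
        # Each completed review contributes in proportion to the author's
        # rating, so high-quality reviews accumulate impact several times
        # faster than careless ones.
        self.impact_score += author_rating / max_rating

    def award_badges(self) -> None:
        # Badges serve as the short-term reward; the cumulative impact score
        # serves as the long-term accreditation signal.
        for threshold, badge in BADGE_THRESHOLDS:
            if self.impact_score >= threshold and badge not in self.badges:
                self.badges.append(badge)
```

Summing normalized ratings is only one possible aggregation; averaging or time-decayed variants would trade off review quality against reviewing volume differently.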
Verify Author Names: My co-authors have confirmed that their names are spelled correctly both on OpenReview and in the camera-ready PDF. (If needed, please update ‘Preferred Name’ in OpenReview to match the PDF.)
No Additional Revisions: I understand that after the May 29 deadline, the camera-ready submission cannot be revised before the conference. I have verified with all authors that they approve of this version.
Pdf Appendices: My camera-ready PDF file contains both the main text (not exceeding the page limits) and all appendices that I wish to include. I understand that any other supplementary material (e.g., separate files previously uploaded to OpenReview) will not be visible in the PMLR proceedings.
Latest Style File: I have compiled the camera ready paper with the latest ICML2025 style files <https://media.icml.cc/Conferences/ICML2025/Styles/icml2025.zip> and the compiled PDF includes an unnumbered Impact Statement section.
Paper Verification Code: NGYxY
Permissions Form: pdf
Primary Area: Other topic (use sparingly and specify relevant keywords)
Keywords: Peer Review System, AI Conferences, Author Feedback, Reviewer Reward
Submission Number: 53