TL;DR: Transform the traditional one-way review system into a bi-directional feedback loop where authors evaluate review quality and reviewers earn formal accreditation
Abstract: The peer review process at major artificial intelligence (AI) conferences faces unprecedented challenges with the surge of paper submissions (now exceeding 10,000 per venue), accompanied by growing concerns over review quality and reviewer responsibility. This position paper argues for **the need to transform the traditional one-way review system into a bi-directional feedback loop where authors evaluate review quality and reviewers earn formal accreditation, creating an accountability framework that promotes a sustainable, high-quality peer review system.** The current review system can be viewed as an interaction among three parties: the authors, the reviewers, and the system (i.e., the conference), and we posit that all three parties share responsibility for the current problems. However, author-side issues can only be addressed through policy enforcement and detection tools, and ethical concerns can ultimately only be corrected through self-reflection. This paper therefore focuses on reforming reviewer accountability with systematic rewards through two key mechanisms: (1) a two-stage bi-directional review system that allows authors to evaluate reviews while minimizing retaliatory behavior, and (2) a systematic reviewer reward system that incentivizes quality reviewing. We call for the community's strong engagement with these problems and with the reforms needed to enhance the peer review process.
Lay Summary: Currently, most AI-related conferences (e.g., ICML, NeurIPS, ICLR, KDD) use the OpenReview system for peer review, in which 4-6 fellow researchers (i.e., reviewers) are assigned to each paper as judges. These reviewers provide written reviews and scores for the paper. Authors then have a chance to correct misconceptions through rebuttals during the discussion phase. Finally, the meta-reviewer (the head of the reviewers) makes a decision (acceptance or rejection) based on the reviews and scores.
However, over the past few years, complaints about peer review quality have increased, most likely due to the rising volume of paper submissions and advances in large language models (LLMs). There are concerning signs that some reviewers may be using LLMs to generate reviews, resulting in superficial or irresponsible feedback that frustrates authors. Unfortunately, under the current review process, authors have no effective means of recourse against such irresponsible reviews. This trend poses a serious threat to AI conferences, as it undermines the reputation and credibility of papers accepted at these prestigious venues.
In this position paper, we argue that we should **(1) implement an author feedback system, where authors can rate reviewers based on review quality and paper comprehension, and (2) provide short-term and long-term incentives to reviewers through digital badges and reviewer impact scores.** We also propose several detailed implementation strategies to protect both authors and reviewers.
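To make the second mechanism concrete, the following is a minimal, purely illustrative sketch (in Python) of how a reviewer impact score and badge might be aggregated from author ratings. The rating scales, weighting, and badge thresholds here are hypothetical assumptions for illustration only and are not prescribed by the paper.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ReviewRating:
    """One author's rating of a single review (hypothetical 1-5 scales)."""
    quality: int        # perceived review quality
    comprehension: int  # perceived understanding of the paper

@dataclass
class ReviewerRecord:
    reviewer_id: str
    ratings: list[ReviewRating] = field(default_factory=list)

    def impact_score(self) -> float:
        """Toy long-term impact score: mean of per-review averages."""
        if not self.ratings:
            return 0.0
        return mean((r.quality + r.comprehension) / 2 for r in self.ratings)

    def badge(self) -> str:
        """Toy short-term badge, using hypothetical thresholds."""
        score = self.impact_score()
        if score >= 4.5:
            return "outstanding-reviewer"
        if score >= 3.5:
            return "quality-reviewer"
        return "none"

# Usage example
rec = ReviewerRecord("reviewer-42")
rec.ratings += [ReviewRating(5, 4), ReviewRating(4, 5)]
print(rec.impact_score(), rec.badge())  # 4.5 outstanding-reviewer
```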
We hope that our position paper will generate strong interest within the community regarding these problems and the reforms needed to enhance the peer review process.
Primary Area: Other topic (use sparingly and specify relevant keywords)
Keywords: Peer Review System, AI Conferences, Author Feedback, Reviewer Reward
Submission Number: 53