Position: The Artificial Intelligence and Machine Learning Community Should Adopt a More Transparent and Regulated Peer Review Process
TL;DR: We analyze peer review practices in AI/ML and show that open reviews foster greater trust and engagement. Our platform, Paper Copilot, highlights the need for more transparent and consistent reviewing to support the global research community.
Abstract: The rapid growth of submissions to top-tier Artificial Intelligence (AI) and Machine Learning (ML) conferences has prompted many venues to transition from closed to open review platforms. Some have fully embraced open peer reviews, allowing public visibility throughout the process, while others adopt hybrid approaches, such as releasing reviews only after final decisions or keeping reviews private despite using open peer review systems. In this work, we analyze the strengths and limitations of these models, highlighting the growing community interest in transparent peer review. To support this discussion, we examine insights from Paper Copilot ([papercopilot.com](https://papercopilot.com/)), a website launched two years ago to aggregate and analyze AI/ML conference data while engaging a global audience. The site has attracted over 200,000 early-career researchers, particularly those aged 18–34, from 177 countries, many of whom are actively engaged in the peer review process. Drawing on our findings, this position paper advocates for a more transparent, open, and well-regulated peer review process, aiming to foster greater community involvement and propel advancements in the field.
Lay Summary: The way research papers are reviewed at top AI and machine learning conferences has a big impact on which ideas are shared and recognized. But the current review system faces serious problems: there are too many papers, reviews are often hidden from public view, and decisions can feel inconsistent or unfair—especially to early-career researchers.
To better understand and improve this process, we created Paper Copilot, a website that tracks and visualizes how papers are reviewed across conferences. Since its launch, over 200,000 people from 177 countries have used it—mostly young researchers. We found that conferences with more open and transparent review systems attract much more community interest and trust. These systems also lead to more thoughtful discussions between reviewers and authors.
Our research shows that making the review process more open—while still protecting privacy—could lead to fairer, more rigorous evaluations. We argue that the AI research community should adopt more consistent and transparent peer review practices, not just to improve fairness, but to better serve the global community pushing this field forward.
Verify Author Names: My co-authors have confirmed that their names are spelled correctly both on OpenReview and in the camera-ready PDF. (If needed, please update ‘Preferred Name’ in OpenReview to match the PDF.)
No Additional Revisions: I understand that after the May 29 deadline, the camera-ready submission cannot be revised before the conference. I have verified with all authors that they approve of this version.
Pdf Appendices: My camera-ready PDF file contains both the main text (not exceeding the page limits) and all appendices that I wish to include. I understand that any other supplementary material (e.g., separate files previously uploaded to OpenReview) will not be visible in the PMLR proceedings.
Latest Style File: I have compiled the camera ready paper with the latest ICML2025 style files <https://media.icml.cc/Conferences/ICML2025/Styles/icml2025.zip> and the compiled PDF includes an unnumbered Impact Statement section.
Paper Verification Code: M2RmM
Link To Code: https://papercopilot.com/
Permissions Form: pdf
Primary Area: Social, Ethical, and Environmental Impacts
Keywords: open statistics; open peer review; AI/ML community; community interest
Submission Number: 11