Vulnerability of Text-Matching in ML/AI Conference Reviewer Assignments to Collusions

Published: 09 Jun 2025, Last Modified: 14 Jul 2025 · CODEML@ICML25 · CC BY 4.0
Keywords: Peer Review, Reviewer Assignments, Adversarial Attacks, OpenReview
TL;DR: We reveal vulnerabilities in automated reviewer assignment on OpenReview and offer suggestions to enhance its robustness.
Abstract: OpenReview is an open-source conference-management platform that supports various aspects of peer review and is widely used by top-tier AI/ML conferences. These conferences use automated algorithms on OpenReview to assign reviewers to paper submissions based on two factors: (1) reviewers' interests, indicated by their paper bids, and (2) domain expertise, inferred from the textual similarity between their prior publications and the submitted manuscripts. A major challenge is collusion rings, where groups of researchers manipulate the assignment process to review each other's papers positively, regardless of their actual quality. Most existing countermeasures target bid manipulation, assuming the text-similarity signal is secure. We demonstrate that, even without bidding, colluding authors and reviewers can exploit the text-matching component of OpenReview to get themselves assigned to their target papers. Our results reveal specific vulnerabilities in the reviewer assignment system, and we offer suggestions to enhance its robustness.
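The text-matching step the abstract describes can be illustrated with a minimal sketch. This is not OpenReview's actual affinity model (which uses learned paper embeddings); it is a toy bag-of-words cosine-similarity matcher, with hypothetical reviewer data, that shows the attack surface in principle: a reviewer whose publication record echoes a target submission's phrasing scores highest and wins the assignment.

```python
from collections import Counter
import math

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Dot product over the shared vocabulary, normalized by vector magnitudes.
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def text_affinity(submission: str, reviewer_pubs: list[str]) -> float:
    # Score a reviewer by their best-matching prior publication.
    sub_vec = Counter(submission.lower().split())
    return max(cosine_similarity(sub_vec, Counter(p.lower().split()))
               for p in reviewer_pubs)

# Hypothetical toy data: a colluder who mirrors the target paper's
# phrasing in their own publication record inflates their affinity score.
submission = "robust federated learning with differential privacy guarantees"
reviewers = {
    "honest_expert": ["convex optimization methods for sparse regression"],
    "colluder": ["robust federated learning with differential "
                 "privacy guarantees revisited"],
}
scores = {name: text_affinity(submission, pubs)
          for name, pubs in reviewers.items()}
assigned = max(scores, key=scores.get)  # the colluder wins the assignment
```

In this sketch the colluder's near-verbatim title dominates the cosine score, while the genuinely qualified reviewer in an unrelated subfield scores zero; real embedding-based matchers are softer but, as the paper argues, remain manipulable.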
Submission Number: 31