Keywords: position paper, ML review, conference mechanism
TL;DR: We argue a credit system would promote better ML reviews.
Abstract: With soaring submission counts, stricter reciprocity review policies, widespread adoption of platforms like OpenReview, and no offsetting pressure from publication fees, the machine learning (ML) community has one of the largest scholarly presences of any scientific field. And yet, almost *everyone* has *many* unpleasant things to share about their review experience. Worse, there is little public space to seriously discuss, let alone debate, what makes a review system effective or how it might be improved.
In this position paper, we organize our discussion around two core problems: *How can we reasonably limit the number of submissions?* and *How can we incentivize good review practices and discourage bad ones?* We first assess the strengths and shortcomings of existing attempts to address these problems. Specifically, we present five takes on popular conference mechanisms and propose two alternative designs for improvement.
Our general position is that meaningful improvement in ML peer review will not come from polite best-practice suggestions tucked into Calls for Papers or Reviewer Guidelines; it requires **enforceable yet fine-grained procedural safeguards** paired with **a currency-like credit system (what we call *OpenReview Points*)**. ML practitioners can "earn" such points through good review practices and "spend" them across one or multiple major conferences to redeem different kinds of "perks", such as complimentary registration or the right to request additional review resources.
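To make the earn-and-spend mechanic concrete, here is a minimal, hypothetical sketch of such a points ledger in Python. The class, method names, and point values are illustrative assumptions for this note only; they do not reflect any implementation or API described in the paper.

```python
# Hypothetical sketch of the proposed credit mechanism. Names and point
# values are illustrative assumptions, not an API from the paper.
from dataclasses import dataclass, field


@dataclass
class PointsLedger:
    """Tracks OpenReview Points earned via reviewing and spent on perks."""
    balances: dict[str, int] = field(default_factory=dict)

    def earn(self, reviewer_id: str, points: int) -> None:
        """Credit a reviewer, e.g. after a timely, substantive review."""
        self.balances[reviewer_id] = self.balances.get(reviewer_id, 0) + points

    def spend(self, reviewer_id: str, points: int, perk: str) -> bool:
        """Debit points for a perk; refuse if the balance is insufficient."""
        if self.balances.get(reviewer_id, 0) < points:
            return False
        self.balances[reviewer_id] -= points
        print(f"{reviewer_id} redeemed '{perk}' for {points} points")
        return True


# Example: points earned at one venue are redeemed at another.
ledger = PointsLedger()
ledger.earn("reviewer_42", 30)
ledger.spend("reviewer_42", 25, "complimentary registration")
```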
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 2457