Abstract: Machine learning (ML) and artificial intelligence (AI) conferences, including the Conference on Neural Information Processing Systems (NeurIPS) and the International Conference on Machine Learning (ICML), have experienced a significant decline in peer review quality in recent years. To address this growing challenge, we introduce the isotonic mechanism, a computationally efficient approach to enhancing the accuracy of noisy review scores by incorporating authors' private assessments of their submissions. Under this mechanism, authors with multiple submissions are required to rank their papers in descending order of perceived quality. Subsequently, the raw review scores are calibrated based on this ranking to produce adjusted scores. We prove that authors are incentivized to truthfully report their rankings because doing so maximizes their expected utility, modeled as an additive convex function over the adjusted scores. Moreover, the adjusted scores are shown to be more accurate than the raw scores, with improvements being particularly significant when the noise level is high and the author has many submissions, a scenario increasingly prevalent at large-scale ML/AI conferences. We further investigate whether submission quality information beyond a simple ranking can be truthfully elicited from authors. We establish that a necessary condition for truthful elicitation is that the mechanism be based on pairwise comparisons of the author's submissions. This result underscores the optimality of the isotonic mechanism because it elicits the most fine-grained truthful information among all mechanisms we consider. We then present several extensions, including a demonstration that the mechanism maintains truthfulness even when authors have only partial rather than complete information about their submission quality.
Finally, we discuss future research directions, focusing on the practical implementation of the mechanism and the further development of a theoretical framework inspired by our mechanism. Funding: This research was supported by the National Science Foundation [Grants CCF-1934876 and CAREER DMS-1847415] and an Alfred P. Sloan Research Fellowship. Supplemental Material: The online appendix is available at https://doi.org/10.1287/opre.2022.0622.
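The calibration step described in the abstract, adjusting raw review scores so that they respect the author's reported ranking, can be viewed as an isotonic regression: the adjusted scores are the least-squares projection of the raw scores onto vectors that are non-increasing in the reported order. The following is a minimal illustrative sketch of that projection using the pool-adjacent-violators algorithm (PAVA); the function names are ours and this is not the authors' reference implementation.

```python
# Sketch of the isotonic mechanism's score-calibration step, assuming
# raw scores are listed in the author's reported ranking (best first).
# The adjusted scores are the closest vector in squared error that is
# non-increasing in that order, computed via pool-adjacent-violators.

def _pava_nondecreasing(scores):
    """L2-project `scores` onto the set of non-decreasing sequences."""
    blocks = []  # each block: [running_sum, count]
    for v in scores:
        blocks.append([v, 1])
        # Pool while the previous block's mean exceeds the current one's.
        while len(blocks) > 1 and (
            blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]
        ):
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    out = []
    for total, count in blocks:
        out.extend([total / count] * count)
    return out

def isotonic_mechanism(raw_scores):
    """Adjust raw scores to be non-increasing in the author's reported
    ranking (index 0 = the paper the author ranked highest)."""
    # Reverse, project onto non-decreasing sequences, reverse back.
    return _pava_nondecreasing(raw_scores[::-1])[::-1]

# Example: the author ranked paper 0 above paper 1, but paper 1 drew a
# higher raw score; the two are pooled to their average.
print(isotonic_mechanism([6.0, 7.0, 4.0]))  # [6.5, 6.5, 4.0]
```

Note how the adjustment only averages scores that violate the reported order; scores already consistent with the ranking pass through unchanged.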