Track: Track 1: Technical Foundations for a Post-AGI World
Keywords: peer review, AI simulation, funding mechanisms, institutional reform, scientific evaluation, human behavior modeling, grant allocation, counterfactual analysis
TL;DR: We propose using AI agents trained on historical grant review data to simulate expert panels and test counterfactual funding mechanisms in silico, offering a low-stakes laboratory for institutional innovation.
Abstract: Inter-reviewer agreement in peer review hovers around $\kappa = 0.17$, barely
above chance; a quarter of funding decisions would reverse with different
reviewers. Yet we lack methods for testing whether proposed reforms improve
outcomes, because experiments on live funding are expensive and ethically
fraught. We propose the \textit{grant simulation machine}: AI agents trained
on historical evaluation data to replicate expert panel behavior, validated
against real decisions, then used to test counterfactual funding mechanisms in
silico. Drawing on AI-based human simulation (reported 85\% accuracy in
modeling individual behavior) and a partnership providing complete
evaluation-to-outcome data, we outline a framework for comparing golden
tickets, partial lotteries, and alternative panel compositions. The approach
faces real limitations: AI may not capture emergent group dynamics, and
simulated outcomes cannot substitute for actual scientific productivity. But
it offers something currently lacking: a low-stakes laboratory for
institutional innovation.
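The two quantities the abstract leans on can be made concrete in a small sketch: Cohen's kappa for agreement between a simulated panel's decisions and the historical ones, and a partial-lottery allocation rule as one candidate counterfactual mechanism. This is illustrative only, not the submission's implementation; the function names, score thresholds, and binary fund/reject encoding are assumptions.

```python
import random


def cohens_kappa(a, b):
    """Cohen's kappa for two binary decision vectors (fund=1, reject=0)."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n                    # marginal fund rates
    p_e = pa1 * pb1 + (1 - pa1) * (1 - pb1)              # chance agreement
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0


def partial_lottery(scores, budget, cutoff_hi, cutoff_lo, rng):
    """Fund every proposal scoring >= cutoff_hi, then fill the remaining
    budget by lottery among the middle band [cutoff_lo, cutoff_hi)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    funded = [p for p in ranked if scores[p] >= cutoff_hi][:budget]
    middle = [p for p in ranked
              if cutoff_lo <= scores[p] < cutoff_hi and p not in funded]
    rng.shuffle(middle)                                  # the lottery step
    funded += middle[: budget - len(funded)]
    return set(funded)
```

In the framework the abstract sketches, kappa would score how well the simulated panel replicates historical decisions, while rules like `partial_lottery` would be run against the same simulated reviews to compare mechanisms in silico.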
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Presenter: ~Arul_Murugan1
Format: Yes, the presenting author will attend in person if this work is accepted to the workshop.
Funding: Yes, the presenting author of this submission falls under ICLR’s funding aims, and funding would significantly impact their ability to attend the workshop in person.
Submission Number: 19