Keywords: OpenReview system, peer review, latent causal model, causal representation learning
TL;DR: This paper analyzes comprehensive data from the OpenReview system to examine how rebuttal strategies causally influence reviewer rating changes.
Abstract: The peer review process is central to scientific publishing, with the rebuttal phase offering authors a critical opportunity to address reviewers' concerns. Yet the causal mechanisms underlying rebuttal effectiveness, particularly how author responses influence final review decisions, remain unclear. In this work, we study rebuttal effectiveness through a two-layer causal analysis of ICLR submissions collected from the OpenReview system. At the structured level, we construct both metadata features (e.g., soundness, presentation) and LLM-inferred features (e.g., clarity, directness), and apply a suite of independence tests to uncover systematic associations with post-rebuttal rating changes. At the unstructured level, we model rebuttal text using a weakly supervised Causal Representation Learning (CRL) framework, where review-related features serve as concept-level supervision. Theoretically, we establish identifiability conditions for recovering human-interpretable latent features under mild assumptions. Empirically, our results uncover complementary causal patterns across structured and unstructured features, highlighting how specific rebuttal strategies shape reviewer assessments. These findings provide actionable guidance for authors in crafting more effective rebuttals, while offering broader implications for transparency, fairness, and efficiency in the peer review process.
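To make the structured-level analysis concrete, below is a minimal sketch (not the authors' code) of how one might test whether a rebuttal feature is independent of post-rebuttal rating changes. The feature name (`directness`) and the synthetic data are hypothetical placeholders; the abstract does not specify which tests the suite contains, so standard chi-squared and rank-correlation tests are used here as illustrative examples.

```python
# Illustrative sketch, assuming a binary LLM-inferred feature per review
# and an integer rating change; names and data are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency, spearmanr

rng = np.random.default_rng(0)

# Hypothetical per-review records: a binary feature ("directness")
# and the post-rebuttal change in the reviewer's rating.
directness = rng.integers(0, 2, size=500)        # 0 = indirect, 1 = direct
rating_delta = rng.choice([-1, 0, 1], size=500)  # rating change in points

# Chi-squared independence test on the contingency table of
# feature value vs. direction of rating change.
table = np.zeros((2, 3), dtype=int)
for f, d in zip(directness, rating_delta):
    table[f, d + 1] += 1
chi2, p_chi2, dof, _ = chi2_contingency(table)

# Spearman rank correlation as a complementary monotonic-association test.
rho, p_rho = spearmanr(directness, rating_delta)

print(f"chi2={chi2:.2f} (p={p_chi2:.3f}), "
      f"spearman rho={rho:.3f} (p={p_rho:.3f})")
```

A small p-value in either test would flag a systematic association between the feature and rating changes, which is the kind of signal the structured-level analysis screens for before the causal modeling step.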
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 9704