Keywords: AI for Science, academic rebuttal, LLM agent
TL;DR: We propose DRPG, a four-stage agentic framework for high-quality academic rebuttal generation.
Abstract: Despite the growing adoption of large language models (LLMs) in scientific research workflows, automated support for academic rebuttal, a crucial step in academic communication and peer review, remains largely underexplored. Existing approaches typically rely on off-the-shelf LLMs or simple pipelines, which struggle with long-context understanding and often fail to produce targeted and persuasive responses. In this paper, we propose **DRPG**, an agentic framework for automatic academic rebuttal generation that operates through four steps: **D**ecompose reviews into atomic concerns, **R**etrieve relevant evidence from the paper, **P**lan rebuttal strategies, and **G**enerate responses accordingly. Notably, the Planner in DRPG reaches over 98% accuracy in identifying the most feasible rebuttal direction. Experiments on data from top-tier conferences demonstrate that DRPG significantly outperforms existing rebuttal pipelines and exceeds average human performance using only an 8B model. Our analysis further demonstrates the effectiveness of the planner design and its value in providing multi-perspective and explainable suggestions. We also show that DRPG performs well in a more complex multi-round setting. These results highlight the effectiveness of DRPG and its potential to provide high-quality rebuttal content and support the scaling of academic discussions. We will release our code on GitHub.
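The four-stage pipeline the abstract describes can be sketched as follows. This is a minimal illustration only: every function, class, and heuristic below (sentence-level decomposition, keyword-overlap retrieval, a two-way strategy choice) is a placeholder assumption, not the paper's actual implementation, which would use LLM agents at each stage.

```python
from dataclasses import dataclass, field

@dataclass
class Concern:
    text: str                               # one atomic reviewer concern
    evidence: list = field(default_factory=list)
    strategy: str = ""
    response: str = ""

def decompose(review: str) -> list:
    """Stage 1 (D): split a review into atomic concerns (placeholder: sentence split)."""
    return [Concern(s.strip()) for s in review.split(".") if s.strip()]

def retrieve(concern: Concern, paper: list) -> None:
    """Stage 2 (R): attach paper passages sharing terms with the concern (placeholder: word overlap)."""
    words = set(concern.text.lower().split())
    concern.evidence = [p for p in paper if words & set(p.lower().split())]

def plan(concern: Concern) -> None:
    """Stage 3 (P): pick a rebuttal direction based on available evidence (placeholder heuristic)."""
    concern.strategy = "clarify-with-evidence" if concern.evidence else "acknowledge-and-commit"

def generate(concern: Concern) -> None:
    """Stage 4 (G): draft a response following the chosen strategy (placeholder template)."""
    concern.response = f"[{concern.strategy}] Re: {concern.text}"

def drpg(review: str, paper: list) -> list:
    """Run all four stages over one review, one concern at a time."""
    concerns = decompose(review)
    for c in concerns:
        retrieve(c, paper)
        plan(c)
        generate(c)
    return concerns
```

The key design point this sketch captures is that planning is an explicit, inspectable stage between retrieval and generation, which is what makes the chosen rebuttal direction measurable (the reported 98% planner accuracy) and explainable.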
Submission Number: 46