Keywords: Process Reward Models, LLM reasoning, Adversarial Training
Abstract: Process Reward Models (PRMs) enhance reasoning ability of LLMs by providing step-level supervision.
However, their widespread adoption is limited by the cost of manual step-level annotation and by the poor generalization of static training data to novel errors.
We introduce Adversarially Trained PRMs ($\texttt{APRM}$), in which a Generator ($G$) learns to produce reasoning errors that deceive a PRM ($R$), while $R$ concurrently learns to detect them.
This interaction yields progressively harder negatives for $R$, improving its robustness and generalization to novel errors without requiring manual step-level labels.
Averaged across diverse mathematical reasoning benchmarks, $\texttt{APRM}$ improves solver accuracy by $+3.4$ percentage points (pp) over the strongest PRM baseline, with gains of $+5.3$ pp on out-of-distribution tasks.
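The adversarial loop described in the abstract can be illustrated with a deliberately minimal sketch (an assumption-laden toy, not the paper's implementation): here a "reasoning step" is a single feature value, the PRM $R$ is a 1-D logistic classifier, and the generator $G$ controls a `margin` that determines how obvious its injected errors are, shrinking it whenever $R$ detects the current errors reliably.

```python
import math
import random

random.seed(0)

# Toy convention (an assumption for this sketch): correct steps have feature
# values near 0; G's erroneous steps are shifted by its current margin.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sample_correct(n):
    return [random.gauss(0.0, 0.3) for _ in range(n)]

def sample_errors(n, margin):
    # G's adversarial errors: the smaller the margin, the harder the negative.
    return [margin + random.gauss(0.0, 0.3) for _ in range(n)]

w, b = 0.0, 0.0   # R's parameters: p(step is correct) = sigmoid(w*x + b)
margin = 3.0      # G starts with easy-to-spot errors
lr = 0.5

for _ in range(30):
    # --- R update: cross-entropy SGD on correct steps vs. generated errors ---
    batch = [(x, 1.0) for x in sample_correct(32)] + \
            [(x, 0.0) for x in sample_errors(32, margin)]
    for x, y in batch:
        p = sigmoid(w * x + b)
        w -= lr * (p - y) * x
        b -= lr * (p - y)
    # --- G update: if R flags current errors reliably, make them harder ---
    detected = sum(sigmoid(w * x + b) < 0.5 for x in sample_errors(64, margin))
    if detected / 64 > 0.8:
        margin = max(0.5, 0.8 * margin)

# After training on progressively harder negatives, R should still flag
# the original, easy errors.
easy_detect = sum(sigmoid(w * x + b) < 0.5
                  for x in sample_errors(200, 3.0)) / 200
print(f"final margin={margin:.2f}, easy-error detection={easy_detect:.2f}")
```

The point of the toy is the curriculum effect: as $R$ improves, $G$'s `margin` shrinks, so $R$ is trained on ever-harder negatives without any manually labeled errors.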
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 9885