PAG: Multi-Turn Reinforced LLM Self-Correction with Policy as Generative Verifier

ICLR 2026 Conference Submission 11731 Authors

18 Sept 2025 (modified: 18 Nov 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Large Language Models; Self-Correction; Multi-Turn Reinforcement Learning
Abstract: Large Language Models (LLMs) have demonstrated impressive capabilities in complex reasoning tasks, yet they still struggle to reliably verify the correctness of their own outputs. Existing solutions to this verification challenge often depend on separate verifier models or require multi-stage self-correction training pipelines, which limit scalability. In this paper, we propose Policy as Generative Verifier (PAG), a simple and effective framework that empowers LLMs to self-correct by alternating between policy and verifier roles within a unified multi-turn reinforcement learning (RL) paradigm. Distinct from prior approaches that always generate a second attempt regardless of model confidence, PAG introduces a selective revision mechanism: the model revises its answer only when its own generative verification step detects an error. This verify-then-revise workflow not only alleviates model collapse but also jointly enhances both reasoning and verification abilities. Extensive experiments across diverse reasoning benchmarks demonstrate that PAG consistently and substantially improves both direct generation accuracy and self-correction performance.
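To make the verify-then-revise workflow concrete, below is a minimal inference-time sketch of the selective revision mechanism described in the abstract. The `generate` callable, the prompt wording, and the "VERDICT" convention are illustrative assumptions, not the authors' actual implementation; the sketch only shows the control flow in which the same model alternates between policy and verifier roles and revises only when its own verification flags an error.

```python
from typing import Callable

def verify_then_revise(generate: Callable[[str], str], question: str) -> str:
    """Sketch of a single verify-then-revise pass with one LLM (hypothetical API)."""
    # Turn 1: the model acts as the policy and proposes an answer.
    answer = generate(
        f"Question: {question}\nSolve step by step and state a final answer."
    )

    # Turn 2: the same model acts as a generative verifier on its own answer.
    verdict = generate(
        f"Question: {question}\nProposed answer: {answer}\n"
        "Check the reasoning and reply with 'VERDICT: correct' or 'VERDICT: incorrect'."
    )

    # Selective revision: revise only when verification detects an error,
    # instead of always generating a second attempt.
    if "incorrect" in verdict.lower():
        answer = generate(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Verifier feedback: {verdict}\nRevise the answer and fix the identified error."
        )
    return answer
```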
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 11731