Keywords: Vision Language Models, Reinforcement Learning, Post-Training, Modular AI
TL;DR: We use clarification-aware reinforcement learning to post-train vision-language models, teaching them to communicate effectively with downstream reasoning systems.
Abstract: Recent text-only models demonstrate remarkable mathematical reasoning capabilities. Extending these capabilities to visual domains requires vision-language models to translate images into text descriptions. However, current models, trained to produce captions for human readers, often omit the precise details that reasoning systems require.
This creates an interface mismatch: reasoners often fail not due to reasoning limitations but because they lack access to critical visual information.
We propose Adaptive-Clarification Reinforcement Learning (AC-RL), which teaches vision-language models, through interaction, what information reasoners need. Our key insight is that clarification requests during training reveal information gaps; by penalizing success that requires clarification, we create pressure for comprehensive initial captions that enable the reasoner to solve the problem in a single pass.
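To make the training signal concrete, here is a minimal sketch of a clarification-penalized episode reward, assuming a binary solved/clarified outcome per episode; the function name and the penalty value of 0.5 are illustrative assumptions, not the paper's exact formulation.

```python
def ac_rl_reward(solved: bool, used_clarification: bool,
                 clarification_penalty: float = 0.5) -> float:
    """Hypothetical reward for one captioning episode.

    The captioner earns full credit only when the reasoner solves the
    problem from the initial caption alone; success that needed a
    clarification round is discounted, and failure earns nothing.
    The penalty value is an assumption for illustration.
    """
    if not solved:
        return 0.0
    if used_clarification:
        # Partial credit: the clarification request revealed an
        # information gap in the initial caption.
        return 1.0 - clarification_penalty
    # Comprehensive first caption: full reward.
    return 1.0
```

Under this shaping, the captioner's expected return is maximized by front-loading all task-relevant visual detail, which is the single-pass behavior the abstract describes.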
AC-RL improves average accuracy by $4.4$ points over pretrained baselines across seven visual mathematical reasoning benchmarks, and analysis shows it would cut clarification requests by up to $39\%$ if clarification were still permitted.
By treating clarification as a form of implicit supervision, AC-RL demonstrates that vision-language interfaces can be effectively learned through interaction alone, without requiring explicit annotations.
Supplementary Material: zip
Primary Area: reinforcement learning
Submission Number: 20285