Keywords: Vision-Language Models, Visual Reasoning, Reinforcement Learning, Data Augmentation
TL;DR: NoisyRollout boosts VLM reasoning by mixing clean and noisy inputs during RL, improving generalization with no extra cost.
Abstract: Recent advances in reinforcement learning (RL) have strengthened the reasoning capabilities of vision-language models (VLMs). However, enhancing policy exploration to better scale test-time compute remains largely underexplored. In addition, VLMs continue to struggle with imperfect visual perception, which in turn degrades the subsequent reasoning process. To address these challenges, we propose **NoisyRollout**, a simple yet effective data augmentation method that mixes trajectories from both clean and moderately distorted images during RL training. By injecting targeted diversity into visual perception and the resulting reasoning patterns, NoisyRollout promotes better policy exploration through vision-oriented inductive biases, ultimately leading to more robust reasoning behaviors. We further adopt a noise annealing schedule that gradually reduces distortion strength over training, leveraging noisy signals early on while ensuring training stability in later stages. Crucially, our method is easy to adopt, **requiring no additional training cost and no modifications to the RL objective**. Extensive experiments on distinct training datasets demonstrate that NoisyRollout achieves state-of-the-art performance among open-source RL-tuned models across out-of-domain reasoning and perception benchmarks. Furthermore, we validate the effectiveness of NoisyRollout across model sizes (7B and 32B) and data scales (from 1K to 6K), highlighting its generalizability and scalability.
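The core mechanism described in the abstract (mixed clean/noisy rollouts plus an annealed distortion strength) is simple enough to sketch. Below is a minimal illustration, not the authors' released code: the `policy.generate` interface, the linear annealing schedule, and Gaussian blur as the distortion are all assumptions made for the sketch, and the paper's actual distortion types and schedule may differ.

```python
# Hypothetical sketch of the NoisyRollout idea: for each prompt, sample half
# the rollouts from the clean image and half from a moderately distorted copy,
# then pool both groups of trajectories for the unmodified RL objective.
from PIL import Image, ImageFilter


def anneal_noise(step: int, total_steps: int, max_sigma: float = 2.0) -> float:
    """Linearly decay distortion strength from max_sigma to 0 over training
    (assumed schedule; the paper only specifies that distortion is annealed)."""
    return max_sigma * max(0.0, 1.0 - step / total_steps)


def distort(image: Image.Image, sigma: float) -> Image.Image:
    """Apply a moderate visual distortion; Gaussian blur is one simple choice."""
    return image.filter(ImageFilter.GaussianBlur(radius=sigma))


def mixed_rollouts(policy, prompt: str, image: Image.Image,
                   n_rollouts: int, step: int, total_steps: int) -> list:
    """Collect trajectories from clean and noisy views of the same image.

    `policy.generate(prompt, image)` is a stand-in for whatever sampling
    interface the VLM exposes. No change to the RL objective is needed:
    both trajectory groups are simply pooled before advantage estimation.
    """
    sigma = anneal_noise(step, total_steps)
    noisy_image = distort(image, sigma)
    trajectories = []
    for i in range(n_rollouts):
        view = image if i < n_rollouts // 2 else noisy_image
        trajectories.append(policy.generate(prompt, view))
    return trajectories
```

Because the augmentation only changes which image each rollout sees, it composes with standard RL pipelines (e.g., GRPO-style group sampling) without touching the loss, which is consistent with the abstract's claim of no extra training cost.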
Submission Number: 17