Keywords: Multimodal Large Language Model, Multimodal reasoning, Process reward model, Domain-reweighting, Bi-level optimization
Abstract: Extending process reward models (PRMs) to multimodal LLMs is hindered by broad domain coverage, train–test distribution shift, and severe dataset quality imbalance. We propose DreamPRM, a bi-level, domain-reweighted framework: a lower-level stage fine-tunes the PRM with per-domain weights that prioritize high-quality reasoning signals, while an upper-level stage evaluates it on a meta set and updates those weights through an aggregation loss. Across diverse math reasoning benchmarks, DreamPRM consistently enhances state-of-the-art MLLMs and outperforms strong baselines in data selection and test-time scaling.
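The abstract describes the bi-level domain-reweighting mechanism only at a high level. Below is a minimal sketch of that idea, assuming a toy linear model in place of the PRM and a one-step lookahead approximation of the upper-level gradient; all names, shapes, and hyperparameters here are illustrative and not taken from the paper.

```python
import torch

torch.manual_seed(0)

# Toy setup (hypothetical): K training domains with differing noise levels,
# plus a small clean meta set, standing in for the paper's domain-imbalanced data.
K, d, n_per_dom, n_meta = 3, 5, 32, 16
w_true = torch.randn(d)
train_x = [torch.randn(n_per_dom, d) for _ in range(K)]
train_y = [x @ w_true + 0.1 * (k + 1) * torch.randn(n_per_dom)
           for k, x in enumerate(train_x)]
meta_x = torch.randn(n_meta, d)
meta_y = meta_x @ w_true

theta = torch.zeros(d, requires_grad=True)    # lower-level model parameters
alpha = torch.zeros(K, requires_grad=True)    # upper-level domain-weight logits
opt_theta = torch.optim.SGD([theta], lr=0.05)
opt_alpha = torch.optim.SGD([alpha], lr=0.1)

def domain_losses(params):
    # Per-domain MSE of the toy linear model; stands in for the PRM fine-tuning loss.
    return torch.stack([((x @ params - y) ** 2).mean()
                        for x, y in zip(train_x, train_y)])

for step in range(200):
    w = torch.softmax(alpha, dim=0)  # normalized domain weights

    # Upper level: take a virtual one-step update of theta under the current
    # weights, then differentiate the meta-set loss w.r.t. the weight logits.
    inner = (w * domain_losses(theta)).sum()
    grad_theta = torch.autograd.grad(inner, theta, create_graph=True)[0]
    theta_virtual = theta - 0.05 * grad_theta
    meta_loss = ((meta_x @ theta_virtual - meta_y) ** 2).mean()
    opt_alpha.zero_grad()
    meta_loss.backward()
    opt_alpha.step()

    # Lower level: ordinary re-weighted training step with the updated weights.
    w = torch.softmax(alpha, dim=0).detach()
    outer = (w * domain_losses(theta)).sum()
    opt_theta.zero_grad()
    outer.backward()
    opt_theta.step()

print("learned domain weights:", torch.softmax(alpha, 0).tolist())
```

In this sketch the noisier domains should end up with smaller weights, mirroring how the upper-level meta evaluation is meant to down-weight low-quality training domains.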
Submission Number: 42