Multi-Task Reinforcement Learning for Enhanced Multimodal LLM-as-a-Judge

Published: 18 Apr 2026, Last Modified: 27 Apr 2026 · ACL 2026 Industry Track Poster · CC BY 4.0
Keywords: MLLM-as-a-Judge, Reinforcement Learning, Multi-task Learning
TL;DR: This paper proposes MT-RL-Judge, a unified multi-task reinforcement learning framework that incentivizes explicit reasoning across diverse tasks, significantly improving the reliability and out-of-domain generalization of MLLM-as-a-Judge.
Abstract: Multimodal Large Language Models (MLLMs) have been widely adopted as MLLM-as-a-Judges due to their strong alignment with human judgment across various visual tasks. However, most existing judge models are optimized for single-task scenarios and struggle to generalize to diverse contexts, which is a critical requirement for reliable evaluation. To address this limitation, we propose Multi-Task Reinforcement Learning for MLLM-as-a-Judge (MT-RL-Judge), a framework that jointly optimizes the judge model across multiple tasks, leveraging the generalization capabilities of RL. Experiments demonstrate that MT-RL-Judge outperforms several strong baselines in both judgment consistency and correlation with human preferences. Furthermore, our approach exhibits robust generalization on out-of-distribution tasks, further validating its effectiveness.
Submission Type: Emerging
Copyright Form: pdf
Submission Number: 438