Controlling Multimodal LLMs via Reward-guided Decoding

Published: 10 Oct 2024, Last Modified: 19 Nov 2024
Venue: AFM 2024 Poster
License: CC BY 4.0
Keywords: multimodal learning, vision-language learning, multimodal large language models, reward modeling, decoding strategies, controllability, visual grounding
TL;DR: We propose a method for personalizing Multimodal LLMs by controlling their generation through reward-guided decoding, enabling user control over output quality and test-time compute in image captioning tasks.
Abstract: As Multimodal Large Language Models (MLLMs) gain widespread applicability, it is becoming increasingly desirable to adapt them for diverse user needs. In this paper, we study the adaptation of MLLMs through controlled decoding. To achieve this, we introduce the first method for reward-guided decoding of MLLMs and demonstrate its application in improving their visual grounding. Our method involves learning a reward model for visual grounding and using it to guide the MLLM's decoding process. Our approach enables on-the-fly controllability of an MLLM's inference process in two ways: first, by giving control over the relative importance of reward and output likelihood during decoding, allowing a user to dynamically trade off object precision and recall in image captioning tasks; second, by giving control over the breadth of the search during decoding, allowing a user to trade off compute for output quality. We evaluate our method on standard object hallucination benchmarks, showing that it provides significant controllability over MLLM inference, while matching or outperforming existing visual grounding methods.
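Note: The following is a minimal, self-contained sketch of token-level reward-guided decoding as described in the abstract, not the authors' implementation. The toy language model, the grounding reward, and the names lm_log_probs, reward, lambda_reward, candidate_width, and reward_guided_decode are all illustrative assumptions; they only show how a weight on the reward and a candidate width give the two controls mentioned above.

    """Sketch of reward-guided decoding (illustrative; all names hypothetical).

    At each step, the top-k next-token candidates from the language model are
    re-scored with a weighted sum of LM log-probability and a reward signal:

        score(token) = log p_LM(token | prefix) + lambda_reward * R(prefix + token)

    lambda_reward trades off fluency against the reward (e.g., visual grounding);
    candidate_width (k) trades off test-time compute against output quality.
    """
    import math
    import random

    VOCAB = ["a", "dog", "cat", "sits", "on", "the", "grass", "<eos>"]


    def lm_log_probs(prefix):
        """Stand-in for an MLLM's next-token log-probabilities over VOCAB."""
        rng = random.Random(hash(tuple(prefix)) & 0xFFFFFFFF)
        logits = [rng.random() for _ in VOCAB]
        norm = math.log(sum(math.exp(x) for x in logits))
        return {tok: x - norm for tok, x in zip(VOCAB, logits)}


    def reward(prefix, grounded_objects=("dog", "grass")):
        """Stand-in grounding reward: fraction of tokens naming visible objects."""
        if not prefix:
            return 0.0
        return sum(tok in grounded_objects for tok in prefix) / len(prefix)


    def reward_guided_decode(lambda_reward=1.0, candidate_width=3, max_len=10):
        """Greedy search guided by log-likelihood plus lambda_reward * reward."""
        prefix = []
        for _ in range(max_len):
            log_probs = lm_log_probs(prefix)
            # Keep only the k most likely candidates (controls compute per step).
            candidates = sorted(log_probs, key=log_probs.get, reverse=True)[:candidate_width]
            # Re-score each candidate with the reward model and pick the best.
            best = max(
                candidates,
                key=lambda tok: log_probs[tok] + lambda_reward * reward(prefix + [tok]),
            )
            if best == "<eos>":
                break
            prefix.append(best)
        return " ".join(prefix)


    if __name__ == "__main__":
        # A larger lambda_reward favors grounded object mentions (precision over
        # recall of fluent but ungrounded tokens); a larger candidate_width spends
        # more compute per decoding step.
        print(reward_guided_decode(lambda_reward=0.0))
        print(reward_guided_decode(lambda_reward=2.0, candidate_width=5))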
Submission Number: 100