Keywords: Multimodal Large Language Model, Uncertainty Quantification, Model Merging
Abstract: Multimodal Large Language Models (MLLMs) have gained increasing popularity as a promising framework for leveraging strong language reasoning capabilities in the vision-language domain. Given the wide range of available MLLMs, model merging potentially offers a cheap way to aggregate their diverse knowledge into a single MLLM. However, directly plugging in existing model merging approaches often leads to suboptimal performance due to ($1$) the inclusion of harmful models that make over-confident predictions on the target task; and ($2$) the lack of specialized designs for vision-language tasks. To tackle these pain points, we conduct pioneering investigations to dissect the merging procedure and propose an uncertainty-guided MLLM merging algorithm, $\textit{i.e.}$, $\texttt{UQ-Merge}$, which $i)$ identifies beneficial candidates for merging, $ii)$ determines the merging order and the number of helpful candidates, and $iii)$ performs appropriate merging. Within our framework, we consider uncertainty quantification on both text and vision inputs to examine the MLLM prediction confidence, and then decide whether and when an MLLM needs to be included. It is worth mentioning that our vision-language uncertainty quantification does not require access to sample labels, making it more practical in various scenarios. Extensive experiments consistently demonstrate the superior MLLM merging performance of $\texttt{UQ-Merge}$ on both held-in and held-out vision-language benchmarks. For example, compared to existing state-of-the-art merging methods, $\texttt{UQ-Merge}$ brings substantial performance improvements of up to $44.3\%$ in average accuracy across $12$ datasets. Codes are available at https://anonymous.4open.science/r/UQ-Merge-7CD7.
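The three steps summarized in the abstract can be illustrated with a minimal, hypothetical sketch. This is not the authors' implementation: it assumes predictive entropy as the (label-free) uncertainty measure, represents each candidate MLLM as a dictionary of weight arrays, and uses uniform weight averaging as the merge operator; the function names `predictive_entropy` and `uq_merge` are illustrative only.

```python
import numpy as np

def predictive_entropy(probs):
    """Mean per-sample entropy of predicted class probabilities.

    probs: array of shape (n_samples, n_classes). Lower entropy
    means the candidate model is more confident on the target task.
    No labels are needed, matching the label-free setting above.
    """
    p = np.clip(probs, 1e-12, 1.0)
    return float(np.mean(-np.sum(p * np.log(p), axis=-1)))

def uq_merge(models, probs_per_model, max_candidates=None):
    """Hypothetical uncertainty-guided merging sketch.

    models: list of dicts mapping parameter name -> np.ndarray
            (all candidates share the same architecture).
    probs_per_model: list of probability arrays, one per candidate,
            predicted on the same unlabeled target samples.
    """
    # i) + ii) rank candidates by ascending uncertainty, so the most
    # confident models are merged first; keep at most max_candidates.
    order = sorted(range(len(models)),
                   key=lambda i: predictive_entropy(probs_per_model[i]))
    if max_candidates is not None:
        order = order[:max_candidates]
    # iii) merge the kept candidates by uniform weight averaging
    # (a placeholder for the paper's actual merging operator).
    merged = {name: np.mean([models[i][name] for i in order], axis=0)
              for name in models[order[0]]}
    return merged, order
```

A quick usage example: given one confident candidate and one near-uniform candidate, the confident one is ranked first, and with `max_candidates=1` only its weights are kept.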
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5724