InterFeedback: Unveiling Interactive Intelligence of Large Multimodal Models with Human Feedback

Published: 06 Mar 2025, Last Modified: 21 Apr 2025, ICLR 2025 Bi-Align Workshop, Oral, CC BY 4.0
Keywords: Human-AI Interactions, Feedback, Large Multimodal Models, Human-in-the-loop Learning
TL;DR: Can large multimodal models evolve through interactive human feedback? We introduce a comprehensive benchmark for evaluation.
Abstract: Existing benchmarks do not test Large Multimodal Models (LMMs) on their interactive intelligence with human users, which is vital for developing general-purpose AI assistants. We design InterFeedback, an interactive framework that can be applied to any LMM and dataset to assess this ability autonomously. On top of this, we introduce InterFeedback-Bench, which evaluates interactive intelligence using two representative datasets, MMMU-Pro and MathVerse, to test 10 different open-source LMMs. Additionally, we present InterFeedback-Human, a newly collected dataset of 120 cases designed for manually testing the interactive performance of leading models such as OpenAI-o1 and Claude-3.5-Sonnet. Our evaluation results show that even state-of-the-art LMMs (e.g., OpenAI-o1) correct their results through human feedback in fewer than 50% of cases. Our findings point to the need for methods that can enhance LMMs' capabilities to interpret and benefit from feedback.
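For readers who want a concrete picture of the answer-feedback-retry protocol the abstract describes, below is a minimal, hypothetical sketch of such an interactive evaluation loop. The function and argument names (`interactive_eval`, `model`, `give_feedback`, the exact-match check, and the toy stand-ins) are illustrative assumptions for this page, not the paper's actual InterFeedback implementation.

```python
from typing import Callable


def interactive_eval(
    model: Callable[[str], str],          # hypothetical: maps a prompt to an answer string
    give_feedback: Callable[[str], str],  # hypothetical: returns feedback on a wrong answer
    question: str,
    reference: str,
    max_rounds: int = 3,
) -> dict:
    """Run an answer -> feedback -> retry loop and record whether the model
    eventually corrects itself, and in how many rounds (a sketch, not the
    paper's protocol)."""
    prompt = question
    for round_id in range(1, max_rounds + 1):
        answer = model(prompt)
        if answer.strip() == reference.strip():
            return {"solved": True, "rounds": round_id}
        # Append the previous answer and feedback, then let the model retry.
        prompt = f"{prompt}\nPrevious answer: {answer}\nFeedback: {give_feedback(answer)}"
    return {"solved": False, "rounds": max_rounds}


if __name__ == "__main__":
    # Toy stand-ins: a "model" that only fixes its answer after seeing feedback.
    def toy_model(prompt: str) -> str:
        return "4" if "Feedback" in prompt else "5"

    result = interactive_eval(
        toy_model,
        lambda ans: "That is incorrect; re-check the arithmetic.",
        question="What is 2 + 2?",
        reference="4",
    )
    print(result)  # {'solved': True, 'rounds': 2}
```

The reported headline result corresponds to how often such a loop ends with `solved: True` after feedback; the sketch above only illustrates the loop structure, not the datasets or feedback providers used in the benchmark.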
Submission Type: Long Paper (9 Pages)
Archival Option: This is a non-archival submission
Presentation Venue Preference: ICLR 2025
Submission Number: 43