Re-ranking Reasoning Context with Tree Search Makes Large Vision-Language Models Stronger

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 spotlight poster · CC BY 4.0
TL;DR: We propose RCTS, a multimodal RAG framework that enhances LVLMs for VQA tasks by integrating a reasoning-context-enriched knowledge base and tree-search re-ranking, achieving state-of-the-art performance.
Abstract: Recent advancements in Large Vision-Language Models (LVLMs) have significantly improved performance on Visual Question Answering (VQA) tasks through multimodal Retrieval-Augmented Generation (RAG). However, existing methods still face challenges, such as the scarcity of knowledge with reasoning examples and erratic responses from retrieved knowledge. To address these issues, we propose a multimodal RAG framework, termed RCTS, which enhances LVLMs by constructing a Reasoning Context-enriched knowledge base and a Tree Search re-ranking method. Specifically, we introduce a self-consistent evaluation mechanism to enrich the knowledge base with intrinsic reasoning patterns. We further propose a Monte Carlo Tree Search with Heuristic Rewards (MCTS-HR) to prioritize the most relevant examples. This ensures that LVLMs can leverage high-quality contextual reasoning for better and more consistent responses. Extensive experiments demonstrate that our framework achieves state-of-the-art performance on multiple VQA datasets, significantly outperforming In-Context Learning (ICL) and Vanilla-RAG methods. These results highlight the effectiveness of our knowledge base and re-ranking method in improving LVLMs.
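To make the tree-search re-ranking idea concrete, below is a minimal, self-contained sketch of UCT-style Monte Carlo Tree Search over orderings of retrieved in-context examples. This is an illustration only, not the paper's MCTS-HR: the function name `mcts_rerank` and the reward (the mean of precomputed per-example scores) are assumptions standing in for the paper's LVLM-derived heuristic rewards, and the real system would score a candidate ordering by querying the LVLM.

```python
import math
import random

def mcts_rerank(candidates, scores, k=2, iters=300, c=1.4, seed=0):
    """Pick an ordered subset of k retrieved examples via UCT tree search.

    States are tuples of chosen example indices (a path from the root).
    `scores[i]` is a placeholder heuristic reward for example i; in a
    framework like RCTS this would come from the LVLM itself.
    """
    rng = random.Random(seed)
    N, W = {(): 0}, {(): 0.0}  # per-state visit counts and total reward

    def children(state):
        used = set(state)
        return [state + (i,) for i in range(len(candidates)) if i not in used]

    def reward(state):
        # Placeholder heuristic: mean score of the chosen examples.
        return sum(scores[i] for i in state) / len(state)

    for _ in range(iters):
        state, path = (), [()]
        # Selection: descend via UCB1 while every child has been visited.
        while len(state) < k:
            kids = children(state)
            unvisited = [s for s in kids if s not in N]
            if unvisited:
                state = rng.choice(unvisited)      # expansion
                N[state], W[state] = 0, 0.0
                path.append(state)
                break
            parent_n = N[state]
            state = max(
                kids,
                key=lambda s: W[s] / N[s] + c * math.sqrt(math.log(parent_n) / N[s]),
            )
            path.append(state)
        # Rollout: randomly complete the ordering to depth k, then score it.
        roll = state
        while len(roll) < k:
            roll = rng.choice(children(roll))
        r = reward(roll)
        # Backpropagation along the selected path.
        for s in path:
            N[s] += 1
            W[s] += r

    # Extract the final ordering by following the most-visited children.
    best = ()
    while len(best) < k:
        kids = [s for s in children(best) if N.get(s, 0) > 0]
        if not kids:
            break
        best = max(kids, key=lambda s: N[s])
    return [candidates[i] for i in best]
```

For instance, `mcts_rerank(["a", "b", "c", "d"], [0.1, 0.9, 0.8, 0.2], k=2)` searches over ordered pairs of examples and concentrates visits on high-reward orderings, returning the two examples it judges most helpful as context.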
Lay Summary: Visual Question Answering systems, which answer questions about images, often struggle when they lack enough examples showing how to reason through complex questions. Even when they find relevant examples, their answers can be inconsistent or unreliable. To solve this, we developed a new framework called RCTS that helps AI models better understand and use existing knowledge. Our method builds a richer knowledge base by identifying and reinforcing consistent reasoning patterns. We also introduced a smart search technique, inspired by game-playing strategies, to pick the most helpful examples for answering each question. This approach significantly improves the accuracy and consistency of AI-generated answers on a variety of image-based question-answering tasks. Our results show that RCTS outperforms current leading methods, offering a promising step forward in making AI systems more reliable when interpreting visual content and responding to natural language questions.
Link To Code: https://github.com/yannqi/RCTS-RAG
Primary Area: Deep Learning->Algorithms
Keywords: Large Vision Language Model, Multimodal Retrieval-Augmented Generation, In-context Learning, Monte Carlo Tree Search
Submission Number: 3925