Efficient Multimodal Planning Agent for Visual Question-Answering

ICLR 2026 Conference Submission 16328 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: VQA, agent, multimodal
Abstract: Visual Question-Answering (VQA) is a challenging multimodal task that requires integrating visual and textual information to generate accurate responses. While multimodal Retrieval-Augmented Generation (mRAG) has shown promise in enhancing VQA systems by providing additional evidence on both the image and text sides, the standard procedure for answering VQA queries, especially knowledge-intensive ones, relies on a multi-stage mRAG pipeline whose stages have inherent dependencies. To mitigate this inefficiency while maintaining VQA performance, this paper proposes training a multimodal planning agent that dynamically decomposes the mRAG pipeline when solving VQA tasks. Our method optimizes the trade-off between efficiency and effectiveness by training the agent to determine whether each mRAG step is necessary. In our experiments, the agent reduces redundant computation, cutting search time by over 60\% compared to existing methods and decreasing the number of costly image retrieval calls. At the same time, our method outperforms all baselines, including a carefully designed prompt-based method, on average across six diverse datasets. Code will be released at https://github.com
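To make the pipeline-decomposition idea concrete, below is a minimal sketch of how a planning agent might gate individual mRAG stages. This is not the authors' implementation: the names (`Plan`, `VQAQuery`, `retrieve_images`, `retrieve_passages`, `answer`) are hypothetical placeholders, the policy is a stubbed heuristic, and the paper's actual agent is a trained multimodal model.

```python
# Hypothetical sketch: a planning agent decides which mRAG steps to run for a
# given VQA query. The paper trains a multimodal model for this decision; the
# policy and retrievers below are stand-in stubs for illustration only.
from dataclasses import dataclass
from typing import List


@dataclass
class Plan:
    need_image_retrieval: bool
    need_text_retrieval: bool


@dataclass
class VQAQuery:
    image: bytes    # raw image content
    question: str   # natural-language question


def plan(query: VQAQuery) -> Plan:
    """Stub policy: the real agent is trained to predict which retrieval
    steps are actually necessary for this particular query."""
    knowledge_intensive = any(
        cue in query.question.lower()
        for cue in ("who", "when", "where", "which year")
    )
    return Plan(need_image_retrieval=knowledge_intensive,
                need_text_retrieval=knowledge_intensive)


def retrieve_images(query: VQAQuery) -> List[str]:
    return ["<similar-image evidence>"]      # placeholder for costly image retrieval


def retrieve_passages(query: VQAQuery) -> List[str]:
    return ["<retrieved text passage>"]      # placeholder for text retrieval


def answer(query: VQAQuery, evidence: List[str]) -> str:
    return f"answer to '{query.question}' given {len(evidence)} evidence items"


def run_pipeline(query: VQAQuery) -> str:
    """Invoke only the mRAG stages the planner deems necessary, skipping
    redundant retrieval for questions the base model can answer directly."""
    p = plan(query)
    evidence: List[str] = []
    if p.need_image_retrieval:
        evidence += retrieve_images(query)
    if p.need_text_retrieval:
        evidence += retrieve_passages(query)
    return answer(query, evidence)


if __name__ == "__main__":
    print(run_pipeline(VQAQuery(image=b"", question="Who designed this building?")))
```

The efficiency gain reported in the abstract comes from this gating: queries that do not need external evidence bypass the retrieval stages entirely, which is where the reduction in search time and image retrieval calls would originate.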
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 16328