UnAC: Evoking Complicated Multimodal Reasoning in LMMs

ACL ARR 2025 July Submission 881 Authors

29 Jul 2025 (modified: 20 Aug 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: Recent large multimodal models (LMMs) have demonstrated impressive image-understanding capabilities. However, they still struggle to perform the complex reasoning required to solve challenging multimodal problems. In this paper, we present UnAC (Understanding, Abstracting, and Checking), a novel multimodal prompting method that synergizes reasoning over complicated problems in the multimodal context of LMMs such as GPT-4o, Gemini-1.5, and GPT-4V. To improve image understanding and capture more details, we propose an adaptive visual prompting method that enables LMMs to focus on specific regions. An image-abstracting prompt is designed to effectively extract information from images. Further, we propose a gradual self-checking scheme that leads to better reasoning by verifying each decomposed sub-question and its answer. Extensive experiments on three public benchmarks -- MathVista, MM-Vet, and MMMU -- demonstrate the effectiveness of our method.
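The abstract outlines a three-stage prompting pipeline: understanding via adaptive visual prompts, abstracting image content into text, and gradual self-checking over decomposed sub-questions. The paper's implementation is not reproduced on this page, so the following is a minimal Python sketch of how such a pipeline might be wired together; `call_lmm`, all prompt templates, and the region-handling step are hypothetical placeholders, not the authors' code.

```python
from typing import Callable, List

# Hypothetical LMM client: (text prompt, image paths) -> text response.
# In practice this would wrap an API for GPT-4o, Gemini-1.5, or GPT-4V.
LMMCall = Callable[[str, List[str]], str]


def unac(call_lmm: LMMCall, question: str, image: str) -> str:
    """Illustrative three-stage UnAC-style pipeline (not the authors' code)."""
    # 1) Understanding: adaptive visual prompting -- ask the model which
    #    regions of the image are relevant to the question.
    regions = call_lmm(
        f"Which regions of this image are relevant to answering: {question}? "
        "Describe each region briefly.",
        [image],
    )

    # 2) Abstracting: extract the relevant visual facts as text, focused on
    #    the regions identified above.
    facts = call_lmm(
        f"Focusing on these regions:\n{regions}\n"
        f"List every fact in the image relevant to: {question}",
        [image],
    )

    # 3) Gradual self-checking: decompose the question, answer and verify
    #    each sub-question, then compose the final answer from verified steps.
    sub_questions = call_lmm(
        f"Decompose into sub-questions, one per line:\n{question}", []
    ).splitlines()

    verified = []
    for sq in sub_questions:
        answer = call_lmm(f"Facts:\n{facts}\nAnswer: {sq}", [])
        check = call_lmm(
            f"Given the facts:\n{facts}\n"
            f"Is '{answer}' a correct answer to '{sq}'? Reply yes or no.",
            [],
        )
        if check.strip().lower().startswith("yes"):
            verified.append(f"{sq} -> {answer}")

    return call_lmm(
        "Using these verified sub-answers:\n" + "\n".join(verified) +
        f"\nGive the final answer to: {question}",
        [],
    )
```

A full implementation would presumably crop the stage-1 regions into separate image inputs rather than passing them back as text, but the control flow above reflects the understand-abstract-check structure the abstract describes.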
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: Multimodal language model reasoning, LMM agents
Languages Studied: English
Submission Number: 881