SELF-IMAGINE: Effective Unimodal Reasoning with Multimodal Models using Self-Imagination

Published: 11 Mar 2024, Last Modified: 15 Mar 2024, LLMAgents @ ICLR 2024 Poster, CC BY 4.0
Keywords: Vision-Language Model, Zero-shot learning, Prompting, Visual Reasoning, Efficient Reasoning
TL;DR: Self-Imagine leverages a single VLM to generate an additional modality (an image) for a given question and uses it alongside the question to produce the answer.
Abstract: The potential of Vision-Language Models (VLMs) often remains underutilized on complex text-based problems, particularly those that could benefit from a visual representation. Echoing humans' ability to solve complex text-based problems by (1) creating a visual diagram of the problem and (2) deducing the steps needed to solve it, we propose Self-Imagine. We leverage a single Vision-Language Model (VLM) to generate a structured representation of the question in HTML, render the HTML as an image, and then use the same VLM to answer the question using both the question and the image. Our approach requires no additional training data or training. We evaluate it on three mathematics tasks and nine general-purpose reasoning tasks using state-of-the-art VLMs (LLaVA and Gemini). Our approach boosts VLM performance on all math tasks (on average GSM8K: +3.145%; ASDIV: +3.25%; SVAMP: +6.90%) and on the majority of the general-purpose reasoning tasks by 3.20% to 6.00% on average.
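The three-stage pipeline described in the abstract (generate HTML from the question, render it to an image, then query the same VLM with both question and image) can be sketched as below. The `vlm_generate` helper is a hypothetical stand-in for any VLM inference call, and the imgkit-based rendering is an assumption for illustration, not necessarily the paper's exact rendering tool.

```python
# A minimal sketch of the Self-Imagine pipeline, under the assumptions noted above.
from typing import Optional

import imgkit  # pip install imgkit; requires wkhtmltoimage on the PATH


def vlm_generate(prompt: str, image_path: Optional[str] = None) -> str:
    """Hypothetical wrapper around a single VLM that accepts text and an optional image."""
    raise NotImplementedError("plug in your VLM inference call (e.g. LLaVA or Gemini) here")


def self_imagine(question: str) -> str:
    # Stage 1: ask the VLM for a structured HTML representation of the question.
    html = vlm_generate(
        "Convert the following question into an HTML layout of its "
        f"quantities and relationships:\n{question}"
    )

    # Stage 2: render the generated HTML to an image (the "self-imagined" visual).
    image_path = "question_diagram.png"
    imgkit.from_string(html, image_path)

    # Stage 3: answer with the same VLM, conditioning on both the question and the image.
    return vlm_generate(
        f"Using the attached diagram, answer the question step by step:\n{question}",
        image_path=image_path,
    )
```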
Submission Number: 90