Retrieval-Augmented Perception: High-resolution Image Perception Meets Visual RAG

Published: 01 May 2025, Last Modified: 18 Jun 2025, ICML 2025 Oral, CC BY 4.0
TL;DR: We propose Retrieval-Augmented Perception (RAP), a training-free framework that retrieves and fuses relevant image crops while preserving spatial context, with RE-Search dynamically selecting the optimal number of crops.
Abstract: High-resolution (HR) image perception remains a key challenge in multimodal large language models (MLLMs). To drive progress beyond the limits of heuristic methods, this paper advances the HR perception capabilities of MLLMs by harnessing cutting-edge long-context techniques such as retrieval-augmented generation (RAG). To this end, we present the first study exploring the use of RAG to address HR perception challenges. Specifically, we propose Retrieval-Augmented Perception (RAP), a training-free framework that retrieves and fuses relevant image crops while preserving spatial context via the proposed Spatial-Awareness Layout. To accommodate different tasks, the proposed Retrieved-Exploration Search (RE-Search) dynamically selects the optimal number of crops based on model confidence and retrieval scores. Experimental results on HR benchmarks demonstrate the effectiveness of RAP, with LLaVA-v1.5-13B achieving a 43% improvement on $V^*$ Bench and 19% on HR-Bench. Code is available at https://github.com/DreamMr/RAP.
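To make the retrieve-then-fuse idea above concrete, here is a minimal sketch in Python. It is not the authors' implementation: the helper names, the cosine-similarity scoring (`embed_image` is passed in as an assumed CLIP-style embedding function), and the grid-based reassembly are all illustrative assumptions; the actual Spatial-Awareness Layout is described in the paper and the linked repository.

```python
# Minimal RAP-style retrieve-then-fuse sketch; illustrative only.
from PIL import Image
import numpy as np

def split_into_crops(image, crop_size=336):
    """Tile the HR image into fixed-size crops, remembering grid coordinates."""
    crops = []
    for row, top in enumerate(range(0, image.height, crop_size)):
        for col, left in enumerate(range(0, image.width, crop_size)):
            box = (left, top,
                   min(left + crop_size, image.width),
                   min(top + crop_size, image.height))
            crops.append(((row, col), image.crop(box)))
    return crops

def retrieve_crops(crops, query_emb, embed_image, k):
    """Rank crops by cosine similarity to the query embedding; keep the top-k."""
    def score(crop):
        v = embed_image(crop)
        return float(np.dot(v, query_emb) /
                     (np.linalg.norm(v) * np.linalg.norm(query_emb)))
    return sorted(crops, key=lambda rc: score(rc[1]), reverse=True)[:k]

def spatial_layout(selected, crop_size=336):
    """Reassemble retrieved crops on a canvas that preserves their original
    relative positions (a simplified stand-in for the paper's
    Spatial-Awareness Layout)."""
    rows = sorted({pos[0] for pos, _ in selected})
    cols = sorted({pos[1] for pos, _ in selected})
    canvas = Image.new("RGB", (len(cols) * crop_size, len(rows) * crop_size))
    for (row, col), crop in selected:
        canvas.paste(crop, (cols.index(col) * crop_size,
                            rows.index(row) * crop_size))
    return canvas
```

One design point worth noting: the layout step keeps the retrieved crops in their original relative order rather than concatenating them arbitrarily, since the downstream MLLM may need to reason about spatial relationships between regions.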
Lay Summary: Understanding high-resolution (HR) images is still a big challenge for multimodal large language models (MLLMs), which work with both text and images. This paper introduces a new approach that helps these models understand HR images better by borrowing advanced methods designed for handling long and complex inputs. We propose a method called Retrieval-Augmented Perception (RAP). Instead of looking at the whole large image at once, RAP breaks the image into smaller parts (called crops) and picks the most relevant ones, then puts them together in a way that keeps the image’s structure and context. Importantly, this method doesn’t require extra training. We also introduce RE-Search, which decides how many image parts to use depending on how confident the model is and how useful each part seems. In tests on high-resolution image tasks, our method worked well: for example, one model improved by 43% on a tough benchmark and 19% on another.
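Similarly, here is a hedged sketch of how model confidence and retrieval scores might be combined to pick the number of crops, in the spirit of RE-Search. It reuses `retrieve_crops` and `spatial_layout` from the sketch above; `answer_confidence` (standing in for an MLLM's self-reported confidence, e.g., the mean token log-probability of its answer) and the mixing weight `alpha` are assumptions, and the paper's actual RE-Search procedure differs in its details.

```python
import numpy as np

def select_num_crops(candidate_ks, crops, query_emb, embed_image,
                     answer_confidence, alpha=0.5):
    """Pick the crop count whose fused view maximizes a blend of model
    confidence and mean retrieval similarity (illustrative scoring)."""
    def cos(v, q):
        return float(np.dot(v, q) / (np.linalg.norm(v) * np.linalg.norm(q)))

    best_k, best_score = candidate_ks[0], float("-inf")
    for k in candidate_ks:                        # e.g., [4, 9, 16]
        selected = retrieve_crops(crops, query_emb, embed_image, k)
        fused = spatial_layout(selected)
        conf = answer_confidence(fused)           # MLLM confidence on this view
        ret = np.mean([cos(embed_image(c), query_emb)
                       for _, c in selected])     # mean retrieval score
        score = alpha * conf + (1 - alpha) * ret
        if score > best_score:
            best_k, best_score = k, score
    return best_k
```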
Link To Code: https://github.com/DreamMr/RAP
Primary Area: Deep Learning->Large Language Models
Keywords: Multimodal Large Language Models, High-resolution Image Perception
Submission Number: 2560