CMRAG: Co-modality-based visual document retrieval and question answering

20 Sept 2025 (modified: 30 Dec 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: RAG, Visual document retrieval, Visual question answering, Co-modality-based RAG
TL;DR: Integrating co-modality information into the RAG framework to improve the retrieval and generation performance of complex visual document question-answering systems.
Abstract: Retrieval-Augmented Generation (RAG) has become a core paradigm in document question answering. However, existing methods are limited when dealing with multimodal documents: one line of work relies on layout analysis and text extraction, which captures only explicit textual information and struggles with images or unstructured content; the other treats document page segments as visual input and passes them directly to vision-language models (VLMs), ignoring the semantic advantages of text and yielding suboptimal retrieval and generation. To address these gaps, we propose the Co-Modality-based RAG (**CMRAG**) framework, which leverages text and images simultaneously for more accurate retrieval and generation. Our framework includes two key components: (1) a Unified Encoding Model (**UEM**) that projects queries, parsed text, and images into a shared embedding space via triplet-based training, and (2) a Unified Co-Modality-informed Retrieval (**UCMR**) method that statistically normalizes similarity scores to effectively fuse cross-modal signals. To support research in this direction, we further construct and release a large-scale triplet dataset of (query, text, image) examples. Experiments demonstrate that our proposed framework consistently outperforms single-modality-based RAG on multiple visual document question-answering (VDQA) benchmarks. These findings show that integrating co-modality information into the RAG framework in a unified manner is an effective approach to improving the performance of complex VDQA systems.
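The abstract does not spell out the UEM training objective or the normalization used by UCMR. The sketch below illustrates one plausible reading, assuming a cosine-distance triplet loss for the unified encoder and per-modality z-score normalization with a weighted sum for the cross-modal fusion; the function names `triplet_loss` and `fuse_scores` and the parameters `margin` and `alpha` are illustrative, not taken from the paper.

```python
import numpy as np

def triplet_loss(q, pos, neg, margin=0.2):
    """Hypothetical triplet objective for the UEM: pull the query embedding
    toward its paired co-modality embedding (parsed text or page image) and
    push it away from a negative, using cosine distance. `margin` is assumed."""
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    d_pos = 1.0 - cos(q, pos)   # distance to the positive example
    d_neg = 1.0 - cos(q, neg)   # distance to the negative example
    return max(0.0, d_pos - d_neg + margin)

def fuse_scores(text_scores, image_scores, alpha=0.5):
    """Hypothetical UCMR-style fusion: z-normalize each modality's similarity
    scores over the candidate set so they are on a comparable scale, then
    combine them with a weighted sum. `alpha` is an assumed mixing weight."""
    z = lambda s: (s - s.mean()) / (s.std() + 1e-8)
    return alpha * z(np.asarray(text_scores, dtype=float)) \
        + (1 - alpha) * z(np.asarray(image_scores, dtype=float))
```

Under this reading, retrieval would rank each candidate page by `fuse_scores`, so a page whose parsed text matches the query weakly but whose rendered image matches strongly (or vice versa) can still surface; the z-normalization step keeps one modality's raw score scale from dominating the other's.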
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 24568