DocVXQA: Context-Aware Visual Explanations for Document Question Answering

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY-NC 4.0
Abstract: We propose **DocVXQA**, a novel framework for visually self-explainable document question answering, where the goal is not only to produce accurate answers to questions but also to learn visual heatmaps that highlight critical regions, offering interpretable justifications for the model's decisions. To integrate explanations into the learning process, we quantitatively formulate explainability principles as explicit learning criteria. Unlike conventional relevance map methods that solely emphasize regions relevant to the answer, our context-aware DocVXQA delivers explanations that are contextually sufficient yet representation-efficient. This fosters user trust while achieving a balance between predictive performance and interpretability in document visual question answering applications. Extensive experiments, including human evaluation, provide strong evidence supporting the effectiveness of our method.
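To make the idea of formulating explainability principles as explicit learning criteria concrete, here is a minimal illustrative sketch of what a composite objective balancing answer accuracy, explanation sufficiency, and representation efficiency could look like. This is an assumption-laden example, not the paper's actual formulation: the function name, the loss weights `lambda_suff` and `lambda_sparse`, and the use of cross-entropy on a masked forward pass are all hypothetical choices made here for illustration.

```python
import torch
import torch.nn.functional as F

def explanation_aware_loss(answer_logits, answer_target,
                           masked_answer_logits, heatmap,
                           lambda_suff=1.0, lambda_sparse=0.1):
    """Hypothetical composite objective: task loss + sufficiency + sparsity.

    answer_logits:        answer predictions from the full document input
    masked_answer_logits: answer predictions when only heatmap-selected
                          regions of the document are kept
    heatmap:              soft relevance map in [0, 1] over document regions
    """
    # Standard answer-prediction loss on the full input.
    task_loss = F.cross_entropy(answer_logits, answer_target)

    # Contextual sufficiency: the highlighted regions alone should still
    # support the correct answer.
    sufficiency_loss = F.cross_entropy(masked_answer_logits, answer_target)

    # Representation efficiency: encourage a small, sparse explanation.
    sparsity_loss = heatmap.abs().mean()

    return task_loss + lambda_suff * sufficiency_loss + lambda_sparse * sparsity_loss
```

Under these assumptions, the sufficiency term pushes the heatmap to cover enough context to justify the answer, while the sparsity term keeps the explanation as small as possible, mirroring the sufficient-yet-efficient trade-off described in the abstract.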
Lay Summary: When people use document visual question answering (DocVQA) systems to ask questions about their documents — like forms, invoices, or reports — the AI often returns answers without any explanation. This lack of transparency makes it hard to trust the results, especially in sensitive scenarios like business or legal documents, where understanding *why* an answer was given is crucial. We created **DocVXQA**, an AI system that doesn’t just answer questions about documents, but also shows why it gave that answer. It highlights the most important parts of the document, known as *relevant regions*, so users can understand the reasoning behind the answers. What makes our approach stand out is that we trained the AI to value explanations as part of its learning process. Instead of just focusing on the answer, it learns to find the smallest and most meaningful parts of the document that justify the response. We tested our system thoroughly, including with real people, and found that it not only performs well but is also easier to trust and understand.
Link To Code: https://github.com/dali92002/DocVXQA
Primary Area: Applications->Computer Vision
Keywords: Document Intelligence, Explainability, Self-explainable Deep Learning, Vision and Language
Submission Number: 4596