FastRM: An efficient and automatic explainability framework for multimodal generative models

Published: 05 Mar 2025, Last Modified: 01 Apr 2025 · Poster · CC BY 4.0
Keywords: Explainability, interpretability, reliable AI, LVLM
TL;DR: In this work, we propose FastRM, an efficient method for predicting the explainable Relevancy Maps of LVLMs.
Abstract: Large Vision Language Models (LVLMs) have demonstrated remarkable reasoning capabilities over textual and visual inputs. However, these models remain prone to generating misinformation, and identifying and mitigating ungrounded responses is crucial for developing trustworthy AI. Traditional explainability methods, such as gradient-based relevancy maps, offer insight into the decision process of models but are often computationally expensive and unsuitable for real-time output validation. In this work, we introduce FastRM, an efficient method for predicting the explainable Relevancy Maps of LVLMs. Furthermore, FastRM provides both quantitative and qualitative assessments of model confidence. Experimental results demonstrate that FastRM achieves a 99.8% reduction in computation time and a 44.4% reduction in memory footprint compared to traditional relevancy-map generation. FastRM makes explainable AI more practical and scalable, promoting its deployment in real-world applications and enabling users to evaluate the reliability of model outputs more effectively.
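For intuition, here is a minimal sketch of the contrast the abstract draws: a gradient-weighted-attention relevancy baseline, which needs a backward pass through the model per generated token, versus a lightweight learned head that predicts token relevancy in a single forward pass. This is an illustrative assumption, not the paper's implementation; the function names, tensor shapes, and MLP architecture below are all hypothetical.

```python
import torch
import torch.nn as nn

# Baseline (hypothetical, Chefer-style): relevancy from gradient-weighted
# attention. The backward pass per generated token is what makes traditional
# relevancy maps expensive at inference time.
def gradient_relevancy(attn: torch.Tensor, logit: torch.Tensor) -> torch.Tensor:
    # attn: (heads, seq, seq) attention kept in the autograd graph of `logit`
    (grad,) = torch.autograd.grad(logit, attn, retain_graph=True)
    return (grad * attn).clamp(min=0).mean(dim=0)  # (seq, seq) relevancy

# FastRM-style predictor sketch (shapes and architecture are assumptions):
# a small head maps the model's hidden states directly to per-token relevancy
# scores. Trained offline against gradient-based maps, it replaces the
# backward pass with one cheap forward pass at inference time.
class RelevancyHead(nn.Module):
    def __init__(self, hidden_dim: int = 4096):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, 512),
            nn.ReLU(),
            nn.Linear(512, 1),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (seq, hidden_dim) -> (seq,) relevancy per input token
        return self.mlp(hidden_states).squeeze(-1)
```

Under this sketch, normalizing the predicted scores over the image tokens would yield a heat-map-style relevancy visualization, and low aggregate relevancy on the visual tokens could serve as a cheap signal that a response is ungrounded.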
Submission Number: 8