Keywords: VLM, hallucination, geometric algebra, composition, black-box
Abstract: Vision Language Models (VLMs) show impressive capabilities in integrating and reasoning over visual and language data. However, these models make mistakes. A common finding, mirroring language models, is their tendency to hallucinate: generating plausible-sounding text that is not grounded in the visual input or, at worst, contradicts it. A growing consensus attributes this behavior to an over-reliance on language; especially as generation progresses, the model suffers from a "fading memory effect" with respect to the provided visual input. We study mechanisms by which this behavior can be controlled. Specifically, using ideas from geometric algebra and relational composition, we propose adding a small, trainable module (named ReCo) on top of any VLM, with no other modification needed. We show that this simple, lightweight module mitigates the fading memory effect in the most widely used VLMs, yielding performance improvements on multiple benchmarks. Additionally, we show that our module can be combined with many other approaches for reducing hallucination, improving results in each case.
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 19481
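The abstract describes ReCo only at a high level: a small trainable module, bolted onto a frozen VLM, that re-injects visual information into the text stream via a relational composition. The following is a minimal, hypothetical sketch of what such a module could look like; it is not the authors' actual architecture. The class name `ReCoSketch`, the binding via an element-wise product of projections (a cheap stand-in for a geometric-algebra composition), and the gated residual update are all illustrative assumptions.

```python
# Hypothetical sketch of a ReCo-like module (not the paper's architecture):
# a small trainable block on top of a frozen VLM that binds pooled visual
# features to text hidden states and feeds the result back residually.
import torch
import torch.nn as nn

class ReCoSketch(nn.Module):
    def __init__(self, text_dim: int, vision_dim: int, bind_dim: int = 256):
        super().__init__()
        # Project both modalities into a shared "binding" space.
        self.text_proj = nn.Linear(text_dim, bind_dim)
        self.vis_proj = nn.Linear(vision_dim, bind_dim)
        # Map the composed representation back to the text stream.
        self.out_proj = nn.Linear(bind_dim, text_dim)
        # Learned gate initialized at zero, so the module starts
        # as an identity map and leaves the frozen VLM unchanged.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, text_hidden: torch.Tensor,
                vis_feats: torch.Tensor) -> torch.Tensor:
        # text_hidden: (batch, seq_len, text_dim) LM hidden states
        # vis_feats:   (batch, num_patches, vision_dim) visual encoder outputs
        t = self.text_proj(text_hidden)            # (B, S, D)
        v = self.vis_proj(vis_feats).mean(dim=1)   # (B, D) pooled vision
        # Element-wise product as a simple stand-in for a relational
        # (geometric-algebra style) binding of the two modalities.
        bound = t * v.unsqueeze(1)                 # (B, S, D)
        # Gated residual update re-injects visual signal at every token,
        # counteracting a "fading memory" of the image during generation.
        return text_hidden + torch.tanh(self.gate) * self.out_proj(bound)
```

Under this reading, only the sketch's parameters would be trained while the underlying VLM stays frozen, consistent with the abstract's claim that no other modification to the VLM is needed.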