Visual language is a system of communication that conveys information through symbols, shapes, and spatial arrangements. Diagrams are a typical example of a visual language, depicting complex concepts and their relationships in the form of an image. The symbolic nature of diagrams presents significant challenges for building models capable of understanding them. Yet, recent studies seem to suggest that Large Vision-Language Models (LVLMs) can tackle even complex reasoning tasks involving diagrams. In this paper, we investigate this phenomenon by developing a comprehensive test suite to evaluate the diagram comprehension capability of LVLMs. Our test suite uses a variety of questions focused on concept entities and their relationships over a set of synthetic and real diagrams across several domains to evaluate the recognition and reasoning abilities of models. Our evaluation of six LVLMs shows that while these models can accurately identify and reason about entities, their ability to understand relationships is notably limited. Further testing reveals that their decent performance on diagram understanding largely stems from leveraging background knowledge as a shortcut to identify and reason about relational information. Thus, we conclude that LVLMs have a limited capability for genuine diagram understanding, and their impressive performance in diagram reasoning is an illusion emanating from other confounding factors, such as the background knowledge in the models.
Diagrams help us understand complex ideas by using symbols and spatial layouts instead of words. While recent AI systems, called Large Vision-Language Models (LVLMs), appear to solve diagram-related tasks impressively, our research finds this success may be misleading. We built a broad evaluation suite to test whether these models truly understand diagrams. Our results show that while LVLMs are good at spotting objects in diagrams, they often fail to grasp how those objects relate to each other. Instead, they rely on memorized knowledge rather than genuinely interpreting the diagrams. This raises concerns about how well these models really "understand" visual information, and calls for more rigorous testing in future AI systems.