Hidden in Plain Sight: Evaluating Abstract Shape Recognition in Vision-Language Models

Published: 26 Sept 2024, Last Modified: 13 Nov 2024
Venue: NeurIPS 2024 Datasets and Benchmarks Track (Poster)
License: CC BY 4.0
Keywords: visual robustness; vision-language models; visual abstraction; in-context learning; multimodality; visual perception; multi-domain generalisation
TL;DR: We find that Vision-Language Models (VLMs) struggle to recognise abstract shapes represented by an arrangement of visual scene elements in images, and introduce a benchmark consisting of such images to evaluate VLM shape perception.
Abstract: Despite the importance of shape perception in human vision, early neural image classifiers relied less on shape information for object recognition than on other (often spurious) features. While recent research suggests that current large Vision-Language Models (VLMs) rely more heavily on shape, we find that they remain seriously limited in this regard. To quantify these limitations, we introduce IllusionBench, a dataset that challenges current cutting-edge VLMs to decipher shape information when the shape is represented by an arrangement of visual elements in a scene. Our extensive evaluations reveal that, while these shapes are easily detectable by human annotators, current VLMs struggle to recognize them, indicating important avenues for future work in developing more robust visual perception systems. The full dataset and codebase are available at: https://arshiahemmat.github.io/illusionbench/
Supplementary Material: zip
Submission Number: 1156