Basic Categories in Vision Language Models: Expert Prompting Doesn't Grant Expertise

Published: 17 Sept 2025, Last Modified: 06 Nov 2025 · ACS 2025 Poster · CC BY 4.0
Keywords: Large Language Models, Vision Language Models, Artificial Intelligence, Basic-Level Effects, Expert Prompting
TL;DR: We analyse vision-language models' categorization behaviour, examining its similarity to human categorization behaviour and to the benefits that behaviour confers in humans.
Abstract: The field of psychology has long recognized a basic level of categorization that humans use when labeling visual stimuli, a term coined by Rosch in 1976. This level of categorization has been found to be used most frequently, to have higher information density, and to aid in visual language tasks through priming in humans. Here, we investigate basic-level categorization in two recently released, open-source vision-language models (VLMs). This paper demonstrates that Llama 3.2 Vision Instruct (11B) and Molmo 7B-D both prefer basic-level categorization, consistent with human behavior. Moreover, the models' preferences match nuanced human behaviors such as the biological versus non-biological basic-level effects and the well-established expert basic-level shift, further suggesting that VLMs acquire complex cognitive categorization behaviors from the human data on which they are trained. We also find that our expert prompting methods yield lower accuracy than our non-expert prompting methods, contradicting popular assumptions about the benefits of expertise prompting.
Paper Track: Technical paper
Submission Number: 31