Artificial Phantasia: Evidence for Propositional Reasoning-Based Mental Imagery in Large Language Models
Keywords: large language models, evaluation, cognitive science, mental imagery, reasoning, representational formats, iconic representations, aphantasia
TL;DR: We gave frontier LLMs novel items from a classic mental imagery task long thought unsolvable through language alone; the best models outperformed humans, providing a new way to evaluate complex emergent capacities.
Abstract: This study offers a novel approach for benchmarking complex cognitive behavior in artificial systems. Almost universally, Large Language Models (LLMs) perform best on tasks that may appear in their training data and can be accomplished using natural language alone, limiting our understanding of their emergent sophisticated cognitive capacities. In this work, we created dozens of novel items for a classic mental imagery task from cognitive psychology. The task consists of following a series of short instructions (3-5 steps), performing basic transformations on imagined letters and simple shapes to form a mental image of an object, and finally recognizing and labeling that object. Traditionally, cognitive psychologists have argued that this task is solvable only via visual mental imagery (i.e., language alone would be insufficient). LLMs are ideally suited to test this hypothesis. First, we gave several state-of-the-art text-only LLMs the written instructions and asked them to report the object that results from the transformations. Then, we established a baseline by testing 100 human subjects on exactly the same task. We found that the best LLMs performed significantly above average human performance (a 9.4\%-12.2\% increase over the human average of 54.7\%, $p<.00001$). Finally, we tested reasoning models configured at different levels of reasoning effort and found the strongest performance when models allocate more reasoning tokens. These results provide evidence that the best LLMs may be able to complete imagery-dependent tasks despite the non-pictorial nature of their architectures. Our study not only demonstrates an emergent cognitive capacity in LLMs on a novel task, but also provides the field with a new benchmark that leaves substantial room for improvement in otherwise highly capable models. Finally, our findings reignite the debate over the representational format of visual imagery in humans, suggesting that propositional reasoning (or at least non-imagistic reasoning) may be sufficient to complete tasks long thought to be imagery-dependent.
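For concreteness, below is a minimal sketch of the evaluation protocol the abstract describes: one task item, a scoring rule based on label matching, and a test of model accuracy against the reported human mean of 54.7%. The example item, the `query_model` stub, the placeholder counts, and the choice of a binomial test are all illustrative assumptions, not the paper's actual materials or analysis.

```python
# Minimal sketch of the evaluation protocol (illustrative assumptions only;
# not the paper's actual items, model interface, or statistical analysis).
from scipy.stats import binomtest

# Hypothetical task item: 3-5 short steps transforming imagined letters and
# simple shapes, ending with a recognizable object the model must name.
item = {
    "steps": [
        "Imagine a capital letter 'D'.",
        "Rotate it 90 degrees counterclockwise.",
        "Place an upside-down capital 'J' centered beneath it.",
    ],
    "answer": "umbrella",
}

def query_model(prompt: str) -> str:
    """Stub for an LLM call; replace with a real text-only model API."""
    raise NotImplementedError

def score_item(item: dict) -> bool:
    """Ask the model to name the resulting object; credit a label match."""
    prompt = (
        "Follow these steps in your mind's eye, then name the object "
        "that results:\n" + "\n".join(item["steps"])
    )
    response = query_model(prompt).strip().lower()
    return item["answer"] in response

# Compare a model's item-level accuracy against the reported human average
# (54.7%) with a one-sided binomial test, one plausible analysis assumed here.
n_items, n_correct = 50, 33  # placeholder counts for illustration
result = binomtest(n_correct, n_items, p=0.547, alternative="greater")
print(f"accuracy={n_correct / n_items:.3f}, p={result.pvalue:.5f}")
```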
Primary Area: applications to neuroscience & cognitive science
Submission Number: 21744