Probing the Representational Geometry of Color Qualia: Dissociating Pure Perception from Task Demands in Brains and AI Models

NeurIPS 2025 Workshop NeurReps Submission 102 Authors

30 Aug 2025 (modified: 29 Oct 2025) · Submitted to NeurReps 2025 · CC BY 4.0
Keywords: Representational Geometry, Symmetry, Computational Neuroscience, Color Qualia, Mechanistic Interpretability
TL;DR: The paper compares vision models to human brain activity during color perception, revealing that the models learn an abstract mathematical geometry of color that is absent in the brain's biological representations.
Abstract: Probing the computational underpinnings of subjective experience, or qualia, remains a central challenge in cognitive neuroscience. This project tackles this question by performing a rigorous comparison of the representational geometry of color qualia between state-of-the-art AI models and the human brain. Using a unique fMRI dataset with a "no-report" paradigm, we use Representational Similarity Analysis (RSA) to compare diverse vision models against neural activity under two conditions: pure perception ("no-report") and task-modulated perception ("report"). Our analysis yields three principal findings. First, nearly all models align better with neural representations of pure perception, suggesting that the cognitive processes involved in task execution are not captured by current feedforward architectures. Second, our analysis reveals a critical interaction between training paradigm and architecture, challenging the simple assumption that Contrastive Language-Image Pre-training (CLIP) universally improves neural plausibility. In our direct comparison, this multi-modal training method enhanced brain-alignment for a vision transformer (ViT), yet had the opposite effect on a ConvNet. Our work contributes a new benchmark task for color qualia to the field, packaged in a Brain-Score compatible format. This benchmark reveals a fundamental divergence in the inductive biases of artificial and biological vision systems, offering clear guidance for developing more neurally plausible models.
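To make the RSA comparison concrete, below is a minimal illustrative sketch (not the authors' code) of the core pipeline the abstract describes: build a representational dissimilarity matrix (RDM) from a model layer's features and from voxel responses, then score their alignment with a Spearman correlation. The stimulus count, feature and voxel dimensions, and the random placeholder data are hypothetical assumptions for illustration only.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr


def rdm(activations):
    """Condensed representational dissimilarity matrix.

    activations: (n_conditions, n_features) array, e.g. model-layer
    features or fMRI voxel responses per color stimulus.
    Correlation distance is a common choice in RSA.
    """
    return pdist(activations, metric="correlation")


def rsa_score(model_acts, brain_acts):
    """Spearman correlation between model and neural RDMs."""
    rho, _ = spearmanr(rdm(model_acts), rdm(brain_acts))
    return rho


# Hypothetical setup: 12 color stimuli; random data stands in for
# real model features and fMRI responses.
rng = np.random.default_rng(0)
model_acts = rng.normal(size=(12, 512))       # e.g. ViT or ConvNet layer features
brain_no_report = rng.normal(size=(12, 200))  # voxel responses, "no-report" runs
brain_report = rng.normal(size=(12, 200))     # voxel responses, "report" runs

print("no-report alignment:", rsa_score(model_acts, brain_no_report))
print("report alignment:   ", rsa_score(model_acts, brain_report))
```

Comparing the two scores per model is what distinguishes alignment with pure perception from alignment with task-modulated perception; the paper's benchmark packages this comparison in a Brain-Score compatible format.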
Submission Number: 102