Keywords: Representational Alignment; Model Recovery; Deep Neural Networks; Similarity Judgments; Representational Geometry; Cognitive Computational Modeling
Abstract: Linearly transforming stimulus representations of deep neural networks yields high-performing models of behavioral and neural responses to complex stimuli. But does the test accuracy of such predictions identify genuine representational alignment? We addressed this question through a large-scale model-recovery study. Twenty diverse vision models were linearly aligned to 4.5 million behavioral judgments from the THINGS odd-one-out dataset and calibrated to reproduce human response variability. For each model in turn, we sampled synthetic responses from its probabilistic predictions, fitted all candidate models to the synthetic data, and tested whether the data-generating model would re-emerge as the best predictor of the simulated data. Model-recovery accuracy improved with training-set size but plateaued below 80%, even at millions of simulated trials. Regression analyses linked misidentification primarily to shifts in representational geometry induced by the linear transformation, as well as to the effective dimensionality of the transformed features. These findings demonstrate that, even with massive behavioral data, overly flexible alignment metrics may fail to guide us toward artificial representations that are genuinely more human-aligned. Model comparison experiments must be designed to balance the trade-off between predictive accuracy and identifiability, ensuring that the best-fitting model is also the right one.
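The abstract describes a model-recovery loop: sample synthetic responses from one model's probabilistic predictions, score every candidate model on those responses, and check whether the data-generating model wins the comparison. Below is a minimal Python sketch of that loop under simplifying assumptions; all names, trial counts, and data shapes (`choice_probs`, `sample_responses`, etc.) are hypothetical, and the step of re-fitting each candidate's linear alignment to the synthetic data is abstracted away as a log-likelihood comparison of fixed choice probabilities.

```python
import numpy as np

# Hypothetical setup: each candidate "model" is summarized by its probability
# distribution over the 3 possible odd-one-out choices on every triplet trial.
rng = np.random.default_rng(0)
n_models, n_trials = 5, 10_000
# choice_probs[m, t] is model m's distribution over the 3 choices on trial t.
choice_probs = rng.dirichlet(alpha=[2.0, 2.0, 2.0], size=(n_models, n_trials))

def sample_responses(probs, rng):
    """Sample one synthetic odd-one-out choice per trial from a model's probabilities."""
    cum = probs.cumsum(axis=1)
    u = rng.random((probs.shape[0], 1))
    return (u > cum).sum(axis=1)  # index (0, 1, or 2) of the sampled choice per trial

def log_likelihood(probs, responses):
    """Log-likelihood of the synthetic choices under a candidate model's probabilities."""
    picked = probs[np.arange(len(responses)), responses]
    return np.log(np.clip(picked, 1e-12, None)).sum()

# Confusion matrix: rows = data-generating model, columns = best-scoring candidate.
confusion = np.zeros((n_models, n_models), dtype=int)
for gen in range(n_models):
    synthetic = sample_responses(choice_probs[gen], rng)
    scores = [log_likelihood(choice_probs[cand], synthetic) for cand in range(n_models)]
    confusion[gen, int(np.argmax(scores))] += 1

recovery_accuracy = np.trace(confusion) / confusion.sum()
print(f"model-recovery accuracy: {recovery_accuracy:.2f}")
```

In the study itself, each candidate would be linearly re-aligned to the synthetic data before scoring; the sketch only illustrates the generate-score-recover structure and how a recovery (confusion) matrix summarizes identifiability.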
Primary Area: Neuroscience and cognitive science (e.g., neural coding, brain-computer interfaces)
Submission Number: 12839