Inferring DNN-Brain Alignment using Representational Similarity Analyses can be Problematic

ICLR 2024 Workshop Re-Align Submission 12
Authors: anonymized for double-blind review

Published: 02 Mar 2024, Last Modified: 02 May 2024
ICLR 2024 Workshop Re-Align Poster
License: CC BY 4.0
Track: long paper (up to 9 pages)
Keywords: Representational Similarity Analysis (RSA); Deep Neural Networks (DNNs)
TL;DR: We show that a high RSA score between two visual systems that identify objects based on different visual features is not only possible in principle but also plausible in practice, as demonstrated in two simulation studies.
Abstract: Representational Similarity Analysis (RSA) has been used to compare representations across individuals, species, and computational models. Here we focus on comparisons made between the activity of hidden units in Deep Neural Networks (DNNs) trained to classify objects and neural activations in visual cortex. In this context, DNNs that obtain high RSA scores are often described as good models of biological vision, a conclusion at odds with the failure of DNNs to account for the results of most vision experiments reported in psychology. How can these two sets of findings be reconciled? Here, we demonstrate that high RSA scores can easily be obtained between two systems that classify objects in qualitatively different ways when second-order confounds are present in image datasets. We argue that these confounds likely exist in the datasets used in current and past research. If RSA is going to be used as a tool to study DNN-human alignment, it will be necessary to experimentally manipulate images in ways that remove these confounds. We hope our simulations motivate researchers to reexamine the conclusions they draw from past research and focus more on RSA studies that manipulate images in theoretically motivated ways.
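For readers unfamiliar with how an RSA score is computed, the following is a minimal sketch of the standard pipeline: build a representational dissimilarity matrix (RDM) for each system from its stimulus-by-unit activation matrix, then correlate the two RDMs. The toy data, the choice of correlation distance for the RDMs, and Spearman correlation for the final score are illustrative assumptions, not details taken from this paper.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activations):
    """Condensed RDM: correlation distance between the activation
    patterns evoked by each pair of stimuli (rows)."""
    return pdist(activations, metric="correlation")

def rsa_score(acts_a, acts_b):
    """RSA score: Spearman correlation between the two systems' RDMs.
    Both inputs must have the same stimuli (rows); the number of
    units/voxels (columns) may differ."""
    rho, _ = spearmanr(rdm(acts_a), rdm(acts_b))
    return rho

# Hypothetical example: 20 stimuli seen by two systems with different
# numbers of measurement channels (e.g., 50 DNN units vs. 30 voxels).
rng = np.random.default_rng(0)
shared = rng.normal(size=(20, 8))                # latent stimulus structure
acts_dnn = shared @ rng.normal(size=(8, 50))     # simulated DNN layer
acts_brain = shared @ rng.normal(size=(8, 30))   # simulated voxel responses
print(rsa_score(acts_dnn, acts_brain))
```

Note that the score depends only on the second-order geometry of each system (the pattern of pairwise dissimilarities), which is what allows the confound the paper describes: two systems relying on different visual features can still produce highly correlated RDMs.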
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 12