Track: Proceedings Track
Keywords: representational geometry, shape metrics, dissimilarity metrics, decodable information, representational similarity measures, linear regression
TL;DR: We theoretically relate some common measures of representational similarity to linear decoding.
Abstract: Neural responses encode information that is useful for a variety of downstream tasks. A common approach to understanding these systems is to build regression models or “decoders” that reconstruct features of the stimulus from neural responses. Here, we investigate how to leverage this perspective to quantify the similarity of different neural systems. This is distinct from typical motivations behind neural network similarity measures like centered kernel alignment (CKA), canonical correlation analysis (CCA), and Procrustes shape distance, which highlight geometric intuition and invariances to orthogonal or affine transformations. We show that CKA, CCA, and other measures can be equivalently motivated from similarity in decoding patterns. Specifically, these measures quantify the average alignment between optimal linear readouts across a distribution of decoding tasks. We also show that the Procrustes shape distance upper bounds the distance between optimal linear readouts and that the converse holds for representations with low participation ratio. Overall, our work demonstrates a tight link between the geometry of neural representations and the ability to linearly decode information. This perspective suggests new ways of measuring similarity between neural systems and also provides novel, unifying interpretations of existing measures.
Submission Number: 57