Track: Extended Abstract Track
Keywords: Representational similarity analysis, symmetry, continual learning, stochastic gradient descent, neural manifolds
TL;DR: We demonstrate that symmetries in data manifolds may lead to drifting representational similarity matrices over learning.
Abstract: What can representational similarity matrices tell us about a neural code? As the popularity of these summary statistics grows, so too does the need for a complete characterization of their properties. Here, we study how functionally irrelevant degrees of freedom affect representational similarity matrices in perhaps the simplest nonlinear neural code: one with localized receptive fields tiling a symmetric manifold. Stimulus symmetries render many tilings functionally equivalent, but these configurations yield different similarity matrices provided that the tiling is sparse. We show that stochastic gradient descent or energetic regularization can generate sparse, drifting tilings, leading in turn to drifting similarity matrices. Our results illustrate the challenges inherent in comparing nonlinear neural codes when functionally equivalent representations are not related by a simple rotation.
Submission Number: 41
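To make the abstract's core claim concrete, here is a minimal sketch (not from the submission) under simple assumptions: Gaussian tuning curves tiling a ring, with representational similarity matrices (RSMs) computed as inner products of population response vectors. Two random tilings are functionally equivalent by the ring's symmetry, yet their RSMs differ substantially when tuning is narrow (sparse) and nearly coincide when tuning is broad. All function names and parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def population_responses(centers, stimuli, width):
    # Gaussian tuning curves on a ring, using wrapped (circular) distance.
    d = np.angle(np.exp(1j * (stimuli[:, None] - centers[None, :])))
    return np.exp(-d**2 / (2 * width**2))

def rsm(responses):
    # Stimulus-by-stimulus similarity: inner products of population vectors.
    return responses @ responses.T

stimuli = np.linspace(0, 2 * np.pi, 50, endpoint=False)

for width, label in [(0.1, "sparse (narrow tuning)"), (1.0, "dense (broad tuning)")]:
    rsms = []
    for _ in range(2):
        # Each draw is one random tiling of the ring by 20 receptive fields;
        # by symmetry, any such tiling supports the same function.
        centers = rng.uniform(0, 2 * np.pi, size=20)
        rsms.append(rsm(population_responses(centers, stimuli, width)))
    diff = np.linalg.norm(rsms[0] - rsms[1]) / np.linalg.norm(rsms[0])
    print(f"{label}: relative RSM difference between tilings = {diff:.2f}")
```

In this toy setting, the sparse regime yields a large relative difference between the two RSMs while the dense regime yields a small one, mirroring the abstract's point that sparse, drifting tilings produce drifting similarity matrices even though the underlying code is functionally unchanged.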