Track: Extended Abstract Track
Keywords: Computational neuroscience, Brain-Score, similarity metrics, neural alignment, neural network models of the brain, neural regressions methodology, NeuroAI
TL;DR: Maximizing neural regression scores does not teach us about the brain, but about how regression works
Abstract: A prominent methodology in computational neuroscience posits that the brain can be understood by identifying which artificial neural network models most accurately predict biological neural activations, as measured by regression test error or similar metrics. In this opinion piece, we argue that the field lacks a canonical definition of model goodness and, rather than engaging with this difficult question, the neural regressions methodology simply asserts a proxy -- neural predictivity -- and then overfits to it. We begin with a notable failure of the neural regressions methodology in which the most predictive models disagreed with key properties of the neural circuit. Next, we highlight converging empirical and mathematical evidence that explains the disconnect: (linear) neural regressions are simply discovering the implicit biases of (linear) regression, which need not identify models that are actually brain-like. This is an instance of Goodhart's law: by selecting neural network models that optimize (linear) neural predictivity, the field's results have devolved into re-discovering general properties of (linear) regression rather than furthering our understanding of the brain. These insights suggest that the neural regressions methodology may be insufficient for understanding the brain, and we call for a critical reevaluation of this methodology in computational neuroscience.
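To make the critiqued pipeline concrete, the sketch below illustrates the neural regressions methodology on synthetic data: fit a (linear) ridge regression from a model's activations to recorded neural responses, then score the model by held-out predictivity. This is a minimal illustration under assumed synthetic data; all names, shapes, and the choice of ridge regression are hypothetical and are not taken from the paper.

```python
# Minimal sketch of the neural regressions methodology (hypothetical data).
# In practice, X would be an ANN layer's activations to a stimulus set and
# Y the recorded neural responses to the same stimuli.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stimuli, n_features, n_neurons = 500, 100, 20

X = rng.normal(size=(n_stimuli, n_features))                # "model activations"
W = rng.normal(size=(n_features, n_neurons))
Y = X @ W + 0.5 * rng.normal(size=(n_stimuli, n_neurons))   # noisy "neural responses"

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=0
)

# Fit a linear map from model features to neural responses and report
# held-out predictivity -- the proxy score the abstract critiques.
reg = Ridge(alpha=1.0).fit(X_train, Y_train)
predictivity = reg.score(X_test, Y_test)  # mean R^2 across neurons on test data
print(f"neural predictivity (test R^2): {predictivity:.3f}")
```

In the Goodhart's-law framing of the abstract, a model search that maximizes this single number rewards whatever features the regression finds easiest to fit, not necessarily brain-like computation.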
Submission Number: 31