A theory of parameter identifiability in data-constrained recurrent neural networks

18 Sept 2025 (modified: 05 Dec 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: computational neuroscience, recurrent neural networks, identifiability
TL;DR: A theory of parameter identifiability in data-constrained recurrent neural networks
Abstract: Researchers routinely study the neural algorithms of the brain by training data-constrained recurrent neural networks (dRNNs) to reproduce observed neural activity. However, whether the biological insights gained from these overparameterized dRNNs are actionable remains underexplored. In particular, it is unclear which dRNN parameters are constrained by a given training set of neural trajectories. To bridge this gap, we focus on a simplified yet experimentally relevant setting of dRNN training, characterize the identifiable parameter subspaces in this setting, and report five key findings: (i) dRNNs contain vast unconstrained parameter regions because the training data are intrinsically low-dimensional; (ii) existing training methods can mistakenly attribute importance to non-identifiable parameters; (iii) a generalized blueprint explains how practical estimators can operate exclusively within identifiable parameter subspaces; (iv) despite parameter non-identifiability, activity subspaces with preserved dynamics exist across all trained dRNNs; and (v) targeted intervention experiments can optimally expand the identifiable parameter subspaces. Our results establish practical guidelines for overcoming parameter non-identifiability when training dRNN models in systems neuroscience.
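The paper's own code is not part of this page; as a hedged illustration of finding (i), the NumPy sketch below (all variable names are ours, and the linear-dynamics setting is an assumption for illustration) shows why low-dimensional training data leave unconstrained parameter directions: if the recorded trajectories of a linear RNN span only an r-dimensional subspace, then any recurrent-weight perturbation that vanishes on that subspace reproduces the data exactly and is therefore non-identifiable.

```python
import numpy as np

rng = np.random.default_rng(0)
N, r, T = 50, 3, 100  # neurons, data rank, time steps

# Orthonormal basis U for an r-dimensional activity subspace.
U, _ = np.linalg.qr(rng.standard_normal((N, r)))

# Linear recurrent weights that keep activity inside span(U):
# W = U A U^T, with A a stable r-by-r effective dynamics matrix.
A = 0.9 * np.linalg.qr(rng.standard_normal((r, r)))[0]  # scaled rotation, spectral radius 0.9
W = U @ A @ U.T

# Perturbation annihilating span(U): Delta @ U = 0, so Delta x = 0
# for every state x lying in the activity subspace.
P_perp = np.eye(N) - U @ U.T            # projector onto the orthogonal complement
Delta = rng.standard_normal((N, N)) @ P_perp

def run(Wmat, x0, steps):
    """Roll out the linear dynamics x_{t+1} = W x_t."""
    xs = [x0]
    for _ in range(steps - 1):
        xs.append(Wmat @ xs[-1])
    return np.stack(xs)

x0 = U @ rng.standard_normal(r)         # initial condition inside the subspace
traj = run(W, x0, T)
traj_pert = run(W + Delta, x0, T)

# The perturbed network reproduces the training data to machine precision,
# so Delta lies in a non-identifiable parameter direction.
print(np.max(np.abs(traj - traj_pert)))  # ~1e-15
```

In this toy setting the non-identifiable directions form an N(N - r)-dimensional subspace of weight space, which gives a concrete sense of the "vast unconstrained parameter regions" the abstract refers to; the paper's actual analysis concerns the general dRNN training setting it defines.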
Primary Area: applications to neuroscience & cognitive science
Submission Number: 10663