Expressivity of Neural Networks with Fixed Weights and Learned Biases

Published: 16 Jun 2024, Last Modified: 10 Jul 2024. HiLD at ICML 2024 Poster. License: CC BY 4.0
Keywords: random neural networks, recurrent neural networks, plasticity, deep learning, neuroscience, multi-task learning
Abstract: Landmark universal function approximation results for neural networks with trained weights and biases provided impetus for their ubiquitous use as learning models in Artificial Intelligence (AI) and neuroscience. Recent work has pushed the bounds of universal approximation by showing that arbitrary functions can similarly be learned by tuning only smaller subsets of parameters of otherwise random networks, for example the output weights. Motivated by the fact that biases can be interpreted as biologically plausible mechanisms for adjusting unit outputs in neural networks, such as tonic inputs or activation thresholds, we investigate the expressivity of neural networks with random weights where only biases are optimized. We provide theoretical and numerical evidence demonstrating that feedforward neural networks with fixed random weights can be trained to perform multiple tasks by learning biases only. We further show that an equivalent result holds for recurrent neural networks predicting dynamical system trajectories. Our results are relevant to neuroscience, where they demonstrate the potential for behaviourally relevant changes in dynamics without modifying synaptic weights, as well as to AI, where they illuminate and generalize multi-task methods such as bias fine-tuning, network gating/masking, and other non-parametric learning mechanisms.
Student Paper: Yes
Submission Number: 51
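
As a concrete illustration of the bias-only setting described in the abstract, the following PyTorch sketch freezes all weight matrices at their random initialization and optimizes only the bias vectors of a feedforward network. This is a minimal sketch under assumed choices: the architecture, the toy regression target, and the hyperparameters are illustrative and are not taken from the paper's experiments.

```python
# Minimal sketch of bias-only training: weights stay at their random
# initialization; only bias vectors receive gradient updates.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Feedforward network with random (soon-to-be-frozen) weights.
model = nn.Sequential(
    nn.Linear(2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

# Freeze every weight matrix; leave only the biases trainable.
for name, param in model.named_parameters():
    param.requires_grad = name.endswith("bias")

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-2)
loss_fn = nn.MSELoss()

# Illustrative regression target: f(x1, x2) = sin(x1) * cos(x2).
x = torch.rand(1024, 2) * 2 * torch.pi
y = (torch.sin(x[:, 0]) * torch.cos(x[:, 1])).unsqueeze(1)

for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(f"final training loss (biases only): {loss.item():.4f}")
```

The same freezing pattern carries over to a recurrent network (e.g. `nn.RNN`), where the input, recurrent, and readout weights would be held fixed and only the bias terms optimized; whether a given task is learnable this way is exactly the expressivity question the paper studies.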