A Theory of Initialisation's Impact on Specialisation

Published: 11 Oct 2024, Last Modified: 10 Nov 2024 · M3L Poster · CC BY 4.0
Keywords: machine learning theory, statistical physics, specialisation, continual learning, disentangled representation learning
TL;DR: We study the impact of initialisation on the representation learnt by neural networks in synthetic settings.
Abstract: Prior work has demonstrated a consistent tendency in neural networks engaged in continual learning tasks, wherein intermediate task similarity results in the highest levels of catastrophic interference with prior learning. This phenomenon is attributed to the network's tendency to reuse learned features across tasks. However, this explanation relies heavily on the condition that such neuron specialisation occurs, i.e. the emergence of localised representations. Our investigation challenges the validity of this assumption. Using theoretical frameworks for the analysis of neural networks, we show a strong dependence of specialisation on the initial conditions. More precisely, we show that weight imbalance and high weight entropy can favour specialised solutions. We then apply these insights in the context of continual learning, first showing the emergence of a monotonic relation between task similarity and forgetting in non-specialised networks, and, finally, assessing the implications for the commonly employed elastic weight consolidation regularisation technique.
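The abstract refers to elastic weight consolidation (EWC), whose standard form adds a quadratic penalty anchoring parameters to the previous task's optimum, weighted by the diagonal Fisher information. A minimal sketch of that penalty (the function name `ewc_penalty` and the toy inputs are illustrative, not from the paper):

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Standard EWC regulariser: (lam/2) * sum_i F_i * (theta_i - theta*_i)^2.

    theta      -- current parameter vector
    theta_star -- parameters at the previous task's optimum
    fisher     -- diagonal Fisher information estimated on the previous task
    lam        -- regularisation strength
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

# Toy example: unit Fisher, parameters drifted by 1 in each of 4 dimensions.
penalty = ewc_penalty(np.ones(4), np.zeros(4), np.ones(4))
```

The penalty is added to the new task's loss, so directions the Fisher marks as important for the old task are pulled back toward `theta_star` while unimportant directions remain free to change.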
Is Neurips Submission: No
Submission Number: 56