Supplementary Material: zip
Keywords: linear mode-connectivity, loss landscape
TL;DR: We systematically analyze how components of modern deep learning pipelines (optimization, architecture, and dataset) break linear mode connectivity, starting from a robust 2-layer MLP.
Abstract: Linear mode-connectivity (LMC), or the lack thereof, is one of the intriguing characteristics of neural network loss landscapes. While empirically well established, it still lacks a proper theoretical understanding. Worse, although empirical data points abound, a systematic study of when networks exhibit LMC is largely missing from the literature. In this work we aim to close this gap. We explore how LMC is affected by three factors: (1) architecture (sparsity, weight-sharing), (2) training strategy (optimization setup), and (3) the underlying dataset. We place particular emphasis on minimal but non-trivial settings, removing as much unnecessary complexity as possible. We believe that our insights can guide future theoretical work on uncovering the inner workings of LMC.
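For readers unfamiliar with the quantity being studied, the following is a minimal sketch of how an LMC check is commonly performed in PyTorch: linearly interpolate the weights of two independently trained networks and measure how much the loss rises along the path. The models `model_a`, `model_b` and the `evaluate` helper are hypothetical placeholders, not code from the submission, and the sketch assumes models whose state dicts contain only floating-point parameters (e.g. a small MLP).

```python
# Hedged sketch of a linear mode-connectivity (LMC) check, assuming PyTorch.
# `model_a`, `model_b` (two independently trained MLPs) and `evaluate`
# (returns a scalar loss for a given model) are hypothetical placeholders.
import copy
import torch


def interpolate_state_dicts(sd_a, sd_b, alpha):
    """Element-wise interpolation (1 - alpha) * sd_a + alpha * sd_b."""
    return {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a}


def loss_barrier(model_a, model_b, evaluate, num_points=11):
    """Evaluate the loss along the linear path between two trained models.

    LMC (approximately) holds if the interpolated losses do not rise
    noticeably above the losses of the two endpoints.
    """
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    probe = copy.deepcopy(model_a)
    losses = []
    for alpha in torch.linspace(0.0, 1.0, num_points):
        probe.load_state_dict(interpolate_state_dicts(sd_a, sd_b, alpha.item()))
        losses.append(evaluate(probe))  # e.g. mean test loss of the probe model
    # Barrier height: worst loss on the path minus the mean of the endpoint losses.
    return max(losses) - 0.5 * (losses[0] + losses[-1])
```

A barrier close to zero indicates that the two solutions are linearly mode-connected; a large barrier indicates they are not.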
Track: Extended Abstract Track
Submission Number: 84