Input Space Mode Connectivity in Deep Neural Networks

Published: 10 Oct 2024, Last Modified: 12 Nov 2024 · SciForDL Oral · CC BY 4.0
TL;DR: We extend the concept of loss landscape mode connectivity to the input space of deep neural networks.
Abstract: We extend the concept of loss landscape mode connectivity to the input space of deep neural networks. Initially studied in parameter space, mode connectivity describes the existence of low-loss paths between solutions (loss minimizers) found via gradient descent. We present theoretical and empirical evidence of its presence in the input space of deep networks, highlighting the broader nature of the phenomenon. We observe that different input images with similar predictions are generally connected, and that for trained models the connecting path is simple, deviating only slightly from linear. We conjecture that input space mode connectivity in high-dimensional spaces is a geometric phenomenon, present even in untrained models, that can be explained by percolation theory. We exploit mode connectivity to obtain new insights about adversarial examples and demonstrate its potential for adversarial detection and interpretability.
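The core measurement behind input space mode connectivity can be sketched as follows: evaluate a model's loss along the straight line between two inputs with similar predictions and check how far the path rises above the endpoint losses. This is an illustrative sketch, not the paper's code; the toy MLP, the target class, and the number of path points are all assumptions made for the example.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's implementation):
# probe input space connectivity by evaluating loss along the linear
# path between two inputs, using a small random (untrained) MLP.

rng = np.random.default_rng(0)

# Toy untrained 2-layer MLP on flattened "images" of dimension d.
d, h, k = 64, 32, 10
W1, b1 = rng.normal(0, 1 / np.sqrt(d), (h, d)), np.zeros(h)
W2, b2 = rng.normal(0, 1 / np.sqrt(h), (k, h)), np.zeros(k)

def logits(x):
    return W2 @ np.maximum(W1 @ x + b1, 0) + b2

def loss(x, target):
    # Cross-entropy of softmax(logits) against a fixed target class.
    z = logits(x)
    z = z - z.max()  # for numerical stability
    return float(np.log(np.exp(z).sum()) - z[target])

# Two distinct inputs; treat class 0 as the shared prediction target.
x0, x1 = rng.normal(size=d), rng.normal(size=d)
target = 0

# Loss along the linear path x(t) = (1 - t) * x0 + t * x1.
ts = np.linspace(0.0, 1.0, 21)
path_losses = [loss((1 - t) * x0 + t * x1, target) for t in ts]

# The "barrier" is how far the path rises above the worse endpoint;
# a barrier near zero indicates the two inputs are (linearly) connected.
barrier = max(path_losses) - max(path_losses[0], path_losses[-1])
print(f"endpoint losses: {path_losses[0]:.3f}, {path_losses[-1]:.3f}")
print(f"linear-path barrier: {barrier:.3f}")
```

In the paper's setting the endpoints would be real images with matching predictions under a trained network, and a near-zero barrier along a simple (close-to-linear) path is the connectivity phenomenon being studied.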
Style Files: I have used the style files.
Submission Number: 84