Abstract: This work develops a framework for post-training augmentation invariance, in which our goal is to add invariance properties to a pretrained network without altering its behavior on the original, non-augmented input distribution. We define this notion precisely and additionally introduce augmented encoders, which are probabilistic encoders that formalize augmentation-based encoding processes and that serve as our fundamental object of study. We introduce two losses for augmented encoders, namely, Markov-Wasserstein minimization and Wasserstein correlation maximization, and we demonstrate empirically that both losses can be used to train lightweight, one-hidden-layer MLP adapter networks $E_{\theta}$ that, when appended to the latent space of a pretrained network $F$, do indeed lead to (approximate) post-training augmentation invariance. For example, on STL10 with $F=\text{DINO}$ features, the composite network $C\circ E_{\theta}\circ F$, where $C$ is a linear classifier and where $E_{\theta}$ is one of our proposed adapter networks, achieves $94\%$ classification accuracy on arbitrarily rotated images, whereas a network of the form $C\circ F$ without the adapter $E_{\theta}$ drops to $71\%$ accuracy. Similarly, we can boost noise-invariant classification results from $58\%$ up to $86\%$. Significantly, we obtain these results with no fine-tuning (the weights of $F$ remain frozen throughout), and our methods introduce little corruption to the original features, since $E_{\theta}$ acts nearly isometrically on the non-augmented latent distribution. In contrast, we show that adapter networks trained with alternative candidate losses, specifically SimCLR and HSIC maximization, produce uncompetitive classification results and fundamentally corrupt the original latent space. Code available at \url{https://github.com/keenan-eikenberry/augmentation_invariance}.
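The composite architecture described above can be sketched in a few lines. The following is a hypothetical, minimal illustration (not the released implementation): a frozen encoder `F`, a one-hidden-layer MLP adapter `E_theta` appended to its latent space, and a linear classifier `C`, composed as $C\circ E_{\theta}\circ F$. All dimensions and the random "frozen" weights are illustrative assumptions.

```python
import math
import random

random.seed(0)

D_IN, D_LAT, D_HID, N_CLASSES = 8, 4, 6, 3  # toy dimensions, for illustration only

def rand_matrix(rows, cols):
    """Random (rows x cols) matrix with 1/sqrt(rows) scaling."""
    scale = 1.0 / math.sqrt(rows)
    return [[random.gauss(0.0, scale) for _ in range(cols)] for _ in range(rows)]

def matvec(W, v):
    """Multiply a vector v (length rows) through matrix W (rows x cols)."""
    return [sum(W[i][j] * v[i] for i in range(len(v))) for j in range(len(W[0]))]

# Frozen feature extractor F (stand-in for e.g. DINO features; never updated).
W_F = rand_matrix(D_IN, D_LAT)
def F(x):
    return matvec(W_F, x)

# One-hidden-layer MLP adapter E_theta: the only trainable component.
W1, W2 = rand_matrix(D_LAT, D_HID), rand_matrix(D_HID, D_LAT)
def E_theta(z):
    h = [max(0.0, a) for a in matvec(W1, z)]  # ReLU hidden layer
    return matvec(W2, h)

# Linear classifier C on top of the adapted latent features.
W_C = rand_matrix(D_LAT, N_CLASSES)
def C(z):
    return matvec(W_C, z)

x = [random.gauss(0.0, 1.0) for _ in range(D_IN)]  # a toy input vector
logits = C(E_theta(F(x)))                          # composite C(E_theta(F(x)))
print(len(logits))                                 # one score per class
```

In practice only the adapter weights (`W1`, `W2` here) would receive gradients during training with the proposed losses, while `W_F` stays frozen, matching the no-fine-tuning setup described in the abstract.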
Submission Type: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Small organizational changes (Markov-Wasserstein kernels first, then Wasserstein correlation, in the background section); consistent naming for models; small phrasing changes; typo fixes
Code: https://github.com/keenan-eikenberry/augmentation_invariance
Assigned Action Editor: ~Pavel_Izmailov1
Submission Number: 6541