SAMBA: Regularized Autoencoders perform Sharpness-Aware Minimization

Published: 20 Jun 2023, Last Modified: 18 Jul 2023 (AABI 2023)
Keywords: variational autoencoder, sharpness-aware minimization, representation learning, variational inference, regularized autoencoder
TL;DR: Regularization of the decoder Jacobian's norm in autoencoders implicitly performs SAM, elucidating why such regularizers can smooth the latent space.
Abstract: Latent space smoothness is often associated with better sample quality in generative models. However, smoothness-inducing regularizers, e.g., the gradient norm penalty on the decoder, remain poorly understood theoretically. We leverage insights from variational inference and Sharpness-Aware Minimization (SAM) to connect gradient norm penalties to smoothness. We propose the deterministic SAM-Based Autoencoder (SAMBA) and show that its gradients are equivalent to those of the gradient-norm-penalized Regularized Autoencoder (RAE). We show experimentally on CIFAR-10 that SAMBA offers more ways to induce smoothness than the RAE and has better smoothness properties than VAEs.
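To make the stated connection concrete, below is a minimal PyTorch sketch (not the paper's implementation) contrasting a gradient-norm-penalized RAE-style objective with a SAM-style objective that perturbs the latent code. The decoder architecture, the hyperparameters `lam` and `rho`, the use of the reconstruction-loss gradient as a proxy for the decoder Jacobian norm, and the choice to perturb the latent code rather than the weights are all illustrative assumptions of this sketch.

```python
import torch
import torch.nn as nn

# Illustrative toy decoder; latent_dim and data_dim are placeholder values.
latent_dim, data_dim = 16, 784
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))

def rae_style_loss(x, z, lam=1e-2):
    # RAE-style objective: reconstruction error plus a gradient-norm penalty.
    # The gradient of the reconstruction loss w.r.t. z serves here as a cheap
    # proxy for the decoder Jacobian norm (an assumption of this sketch).
    z = z.detach().requires_grad_(True)
    recon = ((decoder(z) - x) ** 2).sum(dim=1).mean()
    grad_z = torch.autograd.grad(recon, z, create_graph=True)[0]
    return recon + lam * (grad_z ** 2).sum(dim=1).mean()

def sam_style_loss(x, z, rho=0.05):
    # SAM-style objective: evaluate the reconstruction loss at a latent code
    # perturbed in the direction that locally increases the loss the most
    # within a ball of radius rho (one ascent step, then the usual loss).
    z = z.detach().requires_grad_(True)
    recon = ((decoder(z) - x) ** 2).sum(dim=1).mean()
    grad_z = torch.autograd.grad(recon, z)[0]
    eps = rho * grad_z / (grad_z.norm(dim=1, keepdim=True) + 1e-12)
    return ((decoder(z + eps.detach()) - x) ** 2).sum(dim=1).mean()
```

Expanding the perturbed loss to first order in rho recovers the unperturbed loss plus a term proportional to the gradient norm, which is the standard intuition for why a SAM-style objective and a gradient-norm penalty yield matching gradients.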