Deep Variational Information Bottleneck

ICLR 2017 conference submission
  • TL;DR: Applying the information bottleneck to deep networks using the variational lower bound and reparameterization trick.
  • Abstract: We present a variational approximation to the information bottleneck of Tishby et al. (1999). This variational approach allows us to parameterize the information bottleneck model using a neural network and leverage the reparameterization trick for efficient training. We call this method “Deep Variational Information Bottleneck”, or Deep VIB. We show that models trained with the VIB objective outperform those that are trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attack.
  • Keywords: Theory, Computer vision, Deep learning, Supervised Learning
  • Conflicts: google.com
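
Since the page carries only the abstract, the following is a minimal sketch of the variational IB objective it describes: a stochastic encoder trained with the reparameterization trick, with a classification (distortion) term plus a beta-weighted KL (rate) term against a standard-normal prior. The class and function names, layer widths, bottleneck size, and beta value here are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of a Deep VIB-style model in PyTorch.
# Architecture details (784-dim input, 1024-unit layers, 256-dim bottleneck,
# beta = 1e-3) are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepVIB(nn.Module):
    def __init__(self, in_dim=784, bottleneck_dim=256, num_classes=10):
        super().__init__()
        # Encoder outputs the mean and log-variance of the stochastic
        # bottleneck distribution p(z|x).
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, 2 * bottleneck_dim),
        )
        # Decoder q(y|z): a linear classifier on the sampled bottleneck code.
        self.decoder = nn.Linear(bottleneck_dim, num_classes)

    def forward(self, x):
        stats = self.encoder(x)
        mu, logvar = stats.chunk(2, dim=-1)
        # Reparameterization trick: z = mu + sigma * eps keeps the
        # sampling step differentiable with respect to the encoder.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        return self.decoder(z), mu, logvar

def vib_loss(logits, labels, mu, logvar, beta=1e-3):
    # Variational IB bound: cross-entropy (distortion term) plus a
    # beta-weighted KL(p(z|x) || r(z)) against a standard-normal prior
    # r(z) = N(0, I) (rate term), in closed form for Gaussians.
    ce = F.cross_entropy(logits, labels)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    return ce + beta * kl

# Illustrative usage on random data:
model = DeepVIB()
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
logits, mu, logvar = model(x)
loss = vib_loss(logits, y, mu, logvar)
loss.backward()
```

Setting beta = 0 recovers a plain stochastic classifier; larger beta compresses the representation more aggressively, which is the trade-off the abstract credits for improved generalization and adversarial robustness.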
