Tight Mutual Information Estimation With Contrastive Fenchel-Legendre Optimization

Published: 31 Oct 2022 · Last Modified: 16 Jan 2023 · NeurIPS 2022 Accept
Keywords: mutual information, variational inference, contrastive learning, few-shot learning, meta learning
TL;DR: We present FLO, a novel contrastive variational mutual information bound that better balances the bias-variance trade-off
Abstract: Successful applications of InfoNCE (Information Noise-Contrastive Estimation) and its variants have popularized the use of contrastive variational mutual information (MI) estimators in machine learning. While featuring superior stability, these estimators crucially depend on costly large-batch training, and they sacrifice bound tightness for variance reduction. To overcome these limitations, we revisit the mathematics of popular variational MI bounds through the lens of unnormalized statistical modeling and convex optimization. Our investigation yields a new unified theoretical framework encompassing popular variational MI bounds, and leads to a novel, simple, and powerful contrastive MI estimator we name FLO. Theoretically, we show that the FLO estimator is tight, and that it converges under stochastic gradient descent. Empirically, the proposed FLO estimator overcomes the limitations of its predecessors and learns more efficiently. The utility of FLO is verified using extensive benchmarks, and we further inspire the community with novel applications in meta-learning. Our presentation underscores the foundational importance of variational MI estimation in data-efficient learning.
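
For context, below is a minimal, hypothetical sketch of the InfoNCE estimator referenced in the abstract (not the authors' FLO implementation): it estimates a lower bound on I(X; Y) from a batch of K paired samples via a learned critic, and it makes visible the log K saturation and large-batch dependence that motivate tighter alternatives such as FLO. Function and variable names (infonce_lower_bound, the bilinear critic W) are illustrative placeholders, not from the paper.

```python
# Illustrative InfoNCE-style contrastive MI lower bound (assumed setup, not FLO).
import torch

def infonce_lower_bound(scores: torch.Tensor) -> torch.Tensor:
    """InfoNCE MI lower bound from a [K, K] critic-score matrix.

    scores[i, j] = f(x_i, y_j); the diagonal holds the K joint (positive)
    pairs, off-diagonal entries hold in-batch negatives drawn from the
    product of marginals.
    """
    K = scores.shape[0]
    # Row-wise log-softmax gives log [ e^{f(x_i, y_i)} / sum_j e^{f(x_i, y_j)} ];
    # adding log K yields the standard InfoNCE term, which cannot exceed log K.
    log_ratios = torch.diagonal(torch.log_softmax(scores, dim=1)) + torch.log(torch.tensor(float(K)))
    return log_ratios.mean()

# Toy usage: a bilinear critic on correlated Gaussian pairs.
torch.manual_seed(0)
K, d = 256, 8
x = torch.randn(K, d)
y = x + 0.5 * torch.randn(K, d)             # correlated "views" of x
W = torch.randn(d, d, requires_grad=True)   # critic parameters (placeholder)
scores = x @ W @ y.t()                      # scores[i, j] = x_i^T W y_j
mi_estimate = infonce_lower_bound(scores)
(-mi_estimate).backward()                   # maximize the bound via SGD on -bound
print(float(mi_estimate))
```

Because the estimate is capped at log K, tightening it requires growing the batch size K, which is the costly large-batch dependence the abstract highlights.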
Supplementary Material: pdf