Learning Minimal Representations with Model Invariance

Published: 28 Jan 2022, Last Modified: 13 Feb 2023 · ICLR 2022 Submitted · Readers: Everyone
Keywords: Representation Learning, Minimal Representations, Reinforcement Learning, Self-supervised Learning
Abstract: Sparsity has been identified as an important characteristic of neural networks that generalize well, and it forms the key idea behind minimal representations. Minimal representations are ones that encode only the information required to predict well on a task and nothing more. In this paper we present a simple and effective approach to learning minimal representations. Our method, called ModInv (model invariance), learns with multiple predictors and a single shared representation, creating a bottleneck architecture. The predictors' learning landscapes are diversified by training them independently and with different learning rates. The shared representation acts as an implicit invariance objective, avoiding the different spurious correlations captured by the individual predictors, which in turn leads to better generalization. ModInv is evaluated in both Reinforcement Learning and Self-supervised Learning settings, showing strong performance gains in both. It is extremely simple to implement, adds no wall-clock training overhead, and can be applied across different problem settings.
One-sentence Summary: A practical method for learning minimal sufficient representations, with applications to both RL and vision.
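
The abstract describes a bottleneck architecture: one shared encoder feeding several predictor heads that are trained independently with different learning rates. Below is a minimal, hypothetical PyTorch sketch of how such a setup could look; the class names, dimensions, learning rates, and the `train_step` helper are illustrative assumptions and not the authors' implementation.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Single representation shared by all predictor heads (the bottleneck)."""
    def __init__(self, in_dim=32, rep_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, rep_dim))

    def forward(self, x):
        return self.net(x)

encoder = SharedEncoder()
heads = [nn.Linear(16, 1) for _ in range(3)]   # multiple independent predictors
head_lrs = [1e-3, 3e-4, 1e-4]                  # diversified learning rates (assumed values)

# One optimizer per head; the shared encoder receives gradients from every head.
enc_opt = torch.optim.Adam(encoder.parameters(), lr=3e-4)
head_opts = [torch.optim.Adam(h.parameters(), lr=lr) for h, lr in zip(heads, head_lrs)]
loss_fn = nn.MSELoss()

def train_step(x, y):
    """One update: each head is trained on its own loss; gradients from all
    heads accumulate in the shared encoder, acting as an implicit invariance
    pressure on the representation."""
    enc_opt.zero_grad()
    z = encoder(x)
    total = 0.0
    for head, opt in zip(heads, head_opts):
        opt.zero_grad()
        loss = loss_fn(head(z), y)
        loss.backward(retain_graph=True)  # keep graph alive for the remaining heads
        opt.step()
        total += loss.item()
    enc_opt.step()
    return total / len(heads)

# Toy usage with random data.
x = torch.randn(8, 32)
y = torch.randn(8, 1)
print(train_step(x, y))
```

Because every head back-propagates into the same encoder while the heads themselves never see each other's gradients, the representation is pushed toward features that all predictors can use, which is one plausible reading of the "implicit invariance objective" described in the abstract.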