Representation Convergence: Mutual Distillation is Secretly a Form of Regularization

12 Sept 2025 (modified: 26 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Online Reinforcement Learning, Generalization Theory, Representation Learning, Knowledge Distillation
TL;DR: A novel theoretical framework for formalizing generalization in reinforcement learning.
Abstract: In this paper, we argue that mutual distillation between reinforcement learning policies acts as a form of implicit regularization, preventing the policies from overfitting to irrelevant features. We make two distinct contributions: (i) Theoretically, we provide, for the first time, an end-to-end proof that enhancing a policy's robustness to irrelevant features improves its generalization performance. (ii) Empirically, we demonstrate that mutual distillation between policies contributes to such robustness, enabling the spontaneous emergence of invariant representations over pixel inputs. We do not claim to achieve state-of-the-art performance; rather, we focus on uncovering the underlying principles of generalization and deepening our understanding of its mechanisms.
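To make the notion of "mutual distillation between policies" concrete, below is a minimal illustrative sketch of how such a regularizer is typically implemented for two discrete-action policies: each policy is pulled toward the other's (detached) action distribution on a shared batch of observations via a KL term added to its own RL loss. The `PolicyNet` architecture, the coefficient `beta`, and the placeholder RL losses are hypothetical choices for illustration, not the paper's actual setup.

```python
# Illustrative sketch of a mutual distillation regularizer between two policies.
# Assumed setup: two small MLP policies over discrete actions; the RL losses
# below are placeholders for whatever on-policy objective (e.g. PPO) is used.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolicyNet(nn.Module):
    """Tiny MLP policy producing action logits from a flat observation."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)  # logits over discrete actions


def mutual_distillation_terms(logits_a: torch.Tensor, logits_b: torch.Tensor):
    """Per-policy distillation terms on the same observations.

    Each policy is regularized toward the other's detached action
    distribution, which discourages either policy from relying on
    features the other does not use.
    """
    log_p_a = F.log_softmax(logits_a, dim=-1)
    log_p_b = F.log_softmax(logits_b, dim=-1)
    # KL(p_b || p_a): gradient flows only into policy A (B's output is detached)
    term_a = F.kl_div(log_p_a, log_p_b.exp().detach(), reduction="batchmean")
    # KL(p_a || p_b): gradient flows only into policy B (A's output is detached)
    term_b = F.kl_div(log_p_b, log_p_a.exp().detach(), reduction="batchmean")
    return term_a, term_b


if __name__ == "__main__":
    obs_dim, n_actions, beta = 16, 4, 0.1  # beta: distillation coefficient (assumed)
    policy_a = PolicyNet(obs_dim, n_actions)
    policy_b = PolicyNet(obs_dim, n_actions)
    obs = torch.randn(32, obs_dim)  # shared batch of observations

    # Placeholder RL objectives; in practice these would be the policies'
    # actual policy-gradient losses computed from rollouts.
    rl_loss_a = -policy_a(obs).mean()
    rl_loss_b = -policy_b(obs).mean()

    term_a, term_b = mutual_distillation_terms(policy_a(obs), policy_b(obs))
    total_a = rl_loss_a + beta * term_a  # policy A's regularized loss
    total_b = rl_loss_b + beta * term_b  # policy B's regularized loss
    print(f"distillation terms: {term_a.item():.4f}, {term_b.item():.4f}")
```

Under the paper's framing, the added KL terms are not a knowledge-transfer mechanism so much as a regularizer: agreement between independently trained policies is cheapest to achieve on features both policies can rely on, which biases each away from idiosyncratic, irrelevant features of the pixel input.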
Supplementary Material: zip
Primary Area: reinforcement learning
Submission Number: 4259