Mirror, Mirror of the Flow: How Does Regularization Shape Implicit Bias?

Published: 01 May 2025, Last Modified: 18 Jun 2025, ICML 2025 poster, CC BY 4.0
TL;DR: This work studies how explicit regularization influences implicit bias in overparameterized models, with applications spanning various problem domains, including sparse coding, matrix sensing, single-layer attention, and LoRA.
Abstract: Implicit bias plays an important role in explaining how overparameterized models generalize well. Explicit regularization such as weight decay is often employed in addition to prevent overfitting. While the two concepts have been studied separately, in practice they often act in tandem. Understanding their interplay is key, since explicit regularization can modify both the shape and the strength of the implicit bias. To this end, we incorporate explicit regularization into the mirror flow framework and analyze its lasting effects on the geometry of the training dynamics, covering three distinct effects: positional bias, type of bias, and range shrinking. Our analytical approach encompasses a broad class of problems, including sparse coding, matrix sensing, single-layer attention, and LoRA, for which we demonstrate the utility of our insights. To exploit the lasting effect of regularization and highlight the potential benefit of dynamic weight decay schedules, we propose switching off weight decay during training, which can improve generalization, as we demonstrate in experiments.
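
For readers unfamiliar with the mirror flow viewpoint, the following schematic sketches where explicit regularization enters the dynamics. The notation (reparameterization $g$, Legendre function $R$, decay strength $\lambda$) is generic and assumed for illustration; it is not taken from the paper, whose keyword "time-dependent Legendre function" only suggests the final form shown here.

```latex
% Hedged schematic in assumed notation, not the paper's exact statements.
\begin{align*}
  % Gradient flow on parameters u with reparameterization w = g(u)
  % and weight decay of strength \lambda:
  \dot{u}(t) &= -\nabla_u L\bigl(g(u(t))\bigr) - \lambda\, u(t), \\[4pt]
  % Without weight decay (\lambda = 0), such dynamics can often be
  % rewritten as a mirror flow with a fixed Legendre function R:
  \frac{d}{dt}\,\nabla R\bigl(w(t)\bigr) &= -\nabla_w L\bigl(w(t)\bigr), \\[4pt]
  % With weight decay, the keywords suggest the mirror map becomes
  % time dependent, R_t, so the geometry of training itself evolves:
  \frac{d}{dt}\,\nabla R_t\bigl(w(t)\bigr) &= -\nabla_w L\bigl(w(t)\bigr).
\end{align*}
```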
Lay Summary: This paper explores how explicit regularization (like weight decay) influences the "implicit bias" of machine learning models, which refers to the natural tendency of models to favor certain solutions even without direct constraints. We extend a mathematical framework called mirror flow to include regularization and show that regularization can reshape the optimization landscape in three key ways: changing the kind of solutions a model prefers (type of bias), shifting where it tends to focus in the parameter space (positional bias), and narrowing the range of solutions it can settle on (range shrinking). We find that turning off regularization partway through training can help models generalize better, which we validate with experiments in areas like matrix sensing, vision, and fine-tuning language models. A minimal illustration of such a switch-off schedule is sketched below. The paper offers new insights into how we might control a model's learning behavior more precisely using regularization.
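
The sketch below shows one simple way the "switch off weight decay partway through training" idea could be realized in PyTorch. The model, data, step counts, and hyperparameters are placeholders chosen for illustration; the paper's experiments use their own setups and schedules.

```python
# Hedged sketch (not the authors' code): disable weight decay at a chosen
# training step by editing the optimizer's parameter groups.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)

TOTAL_STEPS = 1000
SWITCH_OFF_STEP = 500  # hypothetical switch point

for step in range(TOTAL_STEPS):
    if step == SWITCH_OFF_STEP:
        # Turn off weight decay for the rest of training; per the paper's
        # "lasting effect", the bias accumulated so far is expected to persist.
        for group in optimizer.param_groups:
            group["weight_decay"] = 0.0

    x = torch.randn(32, 10)  # dummy batch for illustration
    y = torch.randn(32, 1)
    loss = nn.functional.mse_loss(model(x), y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```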
Primary Area: Theory->Deep Learning
Keywords: Implicit bias, explicit regularization, weight decay, matrix sensing, LoRA, attention, mirror flow, time-dependent Legendre function
Submission Number: 1053