Explicit regularization and implicit bias are often studied separately, even though in practice they act in tandem; their interplay, however, remains poorly understood. In this work, we show that explicit regularization modifies the behavior of implicit bias and provides a mechanism to control its strength. By incorporating explicit regularization into the mirror flow framework, we present a general approach to better understand implicit biases and their potential in guiding the design of optimization problems. Our primary theoretical contribution is a characterization of the regularizations and reparameterizations that induce a time-dependent Bregman function, together with a discussion of the implications of its temporal variation. Importantly, our framework encompasses single-layer attention and applies to sparse coding. Extending beyond our core assumptions, we apply this framework to LoRA finetuning, revealing an implicit bias towards sparsity.
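To make the setting concrete, the following is a minimal sketch of the kind of dynamics involved, using illustrative notation (the loss $L$, reparameterization $g$, regularizer $h$, and potential $Q_t$ are placeholder symbols, not necessarily the paper's):

```latex
% Minimal sketch with illustrative notation; L, g, h, Q_t are
% placeholder symbols, not necessarily the paper's.
% Gradient flow on parameters u with an explicit regularizer h,
% under a reparameterization w = g(u):
\[
  \dot{u}(t) \;=\; -\nabla_u \Big[ L\big(g(u(t))\big)
      \;+\; \lambda\, h\big(u(t)\big) \Big].
\]
% Under suitable conditions on g and h, the induced dynamics on
% w(t) = g(u(t)) take the form of a mirror flow whose Bregman
% potential Q_t now varies in time:
\[
  \frac{\mathrm{d}}{\mathrm{d}t}\, \nabla Q_t\big(w(t)\big)
      \;=\; -\nabla_w L\big(w(t)\big).
\]
```

The contrast with the standard (unregularized) mirror flow, where $Q$ is fixed, is the point: the explicit regularizer is what makes the potential time-dependent, and its temporal variation is what modulates the strength of the implicit bias.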