Implicit Bias of SignGD and Adam on Multiclass Separable Data

Published: 01 Jan 2025 · Last Modified: 13 May 2025 · CoRR 2025 · License: CC BY-SA 4.0
Abstract: In the optimization of overparameterized models, different gradient-based methods can achieve zero training error yet converge to markedly different solutions that induce different generalization properties. Despite a decade of research on implicit optimization bias, important questions remain open even in the foundational case of linear classification with separable data. We address this gap by characterizing the implicit bias of both Adam and sign gradient descent (SignGD) in multiclass cross-entropy minimization: we prove that their iterates converge to solutions maximizing the margin with respect to the classifier matrix's max-norm, and we establish the corresponding convergence rates. We then generalize our analysis to p-norm normalized steepest descent (NSD) algorithms. This includes Spectral Descent, which we show converges to the max-margin solution with respect to the spectral norm. A key insight is that the analysis of general entry-wise and Schatten p-norms can be reduced to the analysis of NSD with the max-norm (i.e., SignGD) by exploiting a natural ordering of all p-norms relative to the max-norm and its dual sum-norm. Our results demonstrate that the multiclass linear setting, which is inherently richer than its binary counterpart, provides the most transparent playground for studying implicit biases of matrix-parameter optimization algorithms.
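The following is a minimal sketch, not the authors' code, of the setting the abstract describes: SignGD (i.e., NSD with respect to the max-norm) applied to multiclass cross-entropy on synthetic linearly separable data. The data generation, step-size schedule, and all variable names are illustrative assumptions; the snippet only shows the update W ← W − η·sign(∇L(W)) and monitors the multiclass margin of the max-norm-normalized iterate, whose direction the paper proves converges to the max-norm max-margin classifier.

```python
# Illustrative sketch (assumptions: synthetic data, eta/sqrt(t) step size).
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 60, 5, 3                       # samples, features, classes
X = rng.normal(size=(n, d))
# Plant a ground-truth linear classifier so the data are linearly separable.
W_star = rng.normal(size=(k, d))
y = np.argmax(X @ W_star.T, axis=1)

def softmax_ce_grad(W, X, y):
    """Gradient of the average multiclass cross-entropy loss at W (shape k x d)."""
    logits = X @ W.T
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)                # softmax probabilities
    p[np.arange(len(y)), y] -= 1.0                   # softmax minus one-hot labels
    return p.T @ X / len(y)

W = np.zeros((k, d))
eta = 0.1
for t in range(1, 5001):
    G = softmax_ce_grad(W, X, y)
    W -= eta / np.sqrt(t) * np.sign(G)               # SignGD step (NSD w.r.t. max-norm)

# Direction of the iterate: normalize by the entry-wise max-norm.
W_dir = W / np.max(np.abs(W))
scores = X @ W_dir.T
true_scores = scores[np.arange(n), y]
wrong_scores = np.where(np.eye(k, dtype=bool)[y], -np.inf, scores).max(axis=1)
print("min multiclass margin of normalized iterate:", (true_scores - wrong_scores).min())
```

Under the same assumptions, replacing the `np.sign(G)` step with `U @ Vt` from `np.linalg.svd(G)` would give a sketch of Spectral Descent, i.e., NSD with respect to the spectral norm, which the abstract states converges to the spectral-norm max-margin solution.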