Source-Optimal Training is Transfer-Suboptimal

TMLR Paper 6819 Authors

06 Jan 2026 (modified: 25 Jan 2026) · Under review for TMLR · CC BY 4.0
Abstract: We prove that training a source model optimally for its own task is generically suboptimal when the objective is downstream transfer. We study the source-side optimization problem in L2-SP ridge regression and show a fundamental mismatch between the source-optimal and transfer-optimal source regularization strengths: outside of a measure-zero set, $\tau_0^* \neq \tau_S^*$. We characterize the transfer-optimal source penalty $\tau_0^*$ as a function of task alignment and identify an alignment-dependent reversal: under imperfect alignment ($0<\rho<1$), transfer benefits from stronger source regularization than is source-optimal, whereas in super-aligned regimes ($\rho>1$) it benefits from weaker regularization. Moreover, in isotropic settings, whether transfer helps at all is independent of the target sample size and noise level; it depends only on task alignment and source characteristics. We verify the linear-model predictions in a synthetic ridge regression experiment and present experiments on MNIST, CIFAR-10, and 20 Newsgroups showing that the source-optimal versus transfer-optimal mismatch persists in standard nonlinear transfer learning pipelines.
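To make the setting concrete, below is a minimal synthetic sketch of the source-side sweep the abstract describes: standard ridge on the source task, followed by a closed-form L2-SP fine-tuning step toward the source weights on the target task. The scalar alignment model `w_tgt = rho * w_src`, the fixed target penalty `lam`, and all dimensions, sample sizes, and noise levels are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_src, n_tgt = 50, 200, 20
sigma = 0.5        # observation noise std (assumed)
rho = 0.6          # task alignment; rho > 1 would model the super-aligned regime

w_src = rng.normal(size=d) / np.sqrt(d)   # true source parameter
w_tgt = rho * w_src                       # aligned true target parameter (one simple choice)

def make_data(n, w):
    X = rng.normal(size=(n, d))           # isotropic features
    y = X @ w + sigma * rng.normal(size=n)
    return X, y

X_s, y_s = make_data(n_src, w_src)
X_t, y_t = make_data(n_tgt, w_tgt)

def ridge(X, y, tau, anchor=None):
    """Closed-form L2-SP ridge: argmin_w ||Xw - y||^2 + tau * ||w - anchor||^2."""
    anchor = np.zeros(X.shape[1]) if anchor is None else anchor
    A = X.T @ X + tau * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y + tau * anchor)

lam = 1.0                                 # fixed target-side L2-SP strength (assumed)
taus = np.logspace(-2, 3, 60)             # sweep over source regularization
src_risk, transfer_risk = [], []
for tau in taus:
    w_hat_s = ridge(X_s, y_s, tau)                    # source model trained with penalty tau
    w_hat_t = ridge(X_t, y_t, lam, anchor=w_hat_s)    # L2-SP fine-tune toward source weights
    src_risk.append(np.sum((w_hat_s - w_src) ** 2))   # source parameter risk
    transfer_risk.append(np.sum((w_hat_t - w_tgt) ** 2))  # downstream transfer risk

tau_S = taus[np.argmin(src_risk)]       # source-optimal penalty tau_S*
tau_0 = taus[np.argmin(transfer_risk)]  # transfer-optimal penalty tau_0*
print(f"source-optimal tau_S* ~ {tau_S:.3g}, transfer-optimal tau_0* ~ {tau_0:.3g}")
```

If the reversal stated in the abstract holds, the printed transfer-optimal penalty should exceed the source-optimal one for $0<\rho<1$ and fall below it for $\rho>1$; the two coincide only on a measure-zero set of problem parameters.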
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Anurag_Arnab1
Submission Number: 6819