Hybrid Regularization Methods Achieve Near-Optimal Regularization in Random Feature Models

TMLR Paper2790 Authors

03 Jun 2024 (modified: 11 Oct 2024) · Under review for TMLR · CC BY 4.0
Abstract: We demonstrate the potential of hybrid regularization methods to automatically and efficiently regularize the training of random feature models so that they generalize well on unseen data. Hybrid methods automatically combine the strengths of early stopping and weight decay while avoiding their respective weaknesses. By iteratively projecting the original learning problem onto a lower-dimensional subspace, they provide an efficient way to choose the weight decay hyperparameter. In our work, the weight decay hyperparameter is automatically selected by generalized cross-validation (GCV), which performs leave-one-out cross-validation within a single training run and without the need for a dedicated validation dataset. As a demonstration, we use the random feature model to generate well- and ill-posed training problems arising from image classification. Our results show that hybrid regularization leads to near-optimal regularization in all of these problems. In particular, it is competitive with optimally tuned classical regularization methods. While hybrid regularization methods are popular in many large-scale inverse problems, their potential in machine learning is underappreciated, and our findings motivate their wider use.
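To make the idea in the abstract concrete, the following is a minimal, hedged sketch (not the authors' code) of hybrid regularization for a random feature model: the least-squares problem is projected onto a small Krylov subspace via Golub-Kahan bidiagonalization, and the weight-decay (Tikhonov) parameter is then chosen on the projected problem by generalized cross-validation. All names (e.g., `n_features`, `golub_kahan`, `gcv_lambda`) and the synthetic data are illustrative assumptions.

```python
# Hedged sketch of hybrid regularization for a random feature model.
import numpy as np

rng = np.random.default_rng(0)

# Random feature model: Z = relu(X @ W) maps inputs to random features.
n, d, n_features = 500, 20, 300
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)                      # placeholder targets (assumption)
W = rng.standard_normal((d, n_features))
Z = np.maximum(X @ W, 0.0)

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization: A V_k = U_{k+1} B_k."""
    m, p = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((p, k)); B = np.zeros((k + 1, k))
    beta1 = np.linalg.norm(b); U[:, 0] = b / beta1
    for j in range(k):
        v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        alpha = np.linalg.norm(v); V[:, j] = v / alpha; B[j, j] = alpha
        u = A @ V[:, j] - alpha * U[:, j]
        beta = np.linalg.norm(u); U[:, j + 1] = u / beta; B[j + 1, j] = beta
    return U, B, V, beta1

def gcv_lambda(B, beta1, lambdas):
    """GCV on the projected problem: min_w ||B w - beta1 e_1||^2 + lam ||w||^2."""
    k1, k = B.shape
    rhs = np.zeros(k1); rhs[0] = beta1
    best, best_lam, best_w = np.inf, None, None
    for lam in lambdas:
        M = B.T @ B + lam * np.eye(k)
        w = np.linalg.solve(M, B.T @ rhs)
        resid = np.linalg.norm(B @ w - rhs) ** 2
        dof = k1 - np.trace(B @ np.linalg.solve(M, B.T))   # trace(I - influence matrix)
        score = resid / dof ** 2
        if score < best:
            best, best_lam, best_w = score, lam, w
    return best_lam, best_w

# Project onto a 30-dimensional subspace, then pick the weight decay by GCV.
U, B, V, beta1 = golub_kahan(Z, y, k=30)
lam, w = gcv_lambda(B, beta1, np.logspace(-6, 2, 50))
coef = V @ w                                     # weights of the random feature model
print("GCV-selected weight decay:", lam)
```

In a full hybrid method the GCV score would typically also guide when to stop iterating; the sketch fixes the subspace dimension for brevity and only illustrates the projection-plus-GCV mechanism described in the abstract.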
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: The previous submission was rejected due to non-anonymity. In this version, the funding information has been removed to preserve anonymity.
Assigned Action Editor: ~Mathurin_Massias1
Submission Number: 2790