Cross-regularization: Adaptive Model Complexity through Validation Gradients

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: Cross-regularization directly optimizes regularization parameters (e.g., parameter norms, noise, data augmentation) through gradient descent on validation data, eliminating manual hyperparameter tuning.
Abstract: Model regularization typically requires extensive manual tuning to balance complexity against overfitting. Cross-regularization resolves this tradeoff by computing validation gradients that directly adapt regularization parameters during training. The method splits parameter optimization: training data guides feature learning while validation data shapes complexity controls, and the procedure provably converges to cross-validation optima with computational cost that scales only with the number of regularization parameters. When implemented through noise injection in neural networks, the approach reveals striking patterns: unexpectedly high noise tolerance and architecture-specific regularization that emerges organically during training. Beyond complexity control, the framework integrates seamlessly with data augmentation and uncertainty calibration while retaining single-run efficiency through a simple gradient-based update.
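A minimal sketch of the core update, assuming a PyTorch-style implementation (the names NoisyMLP and log_noise, the optimizer settings, and the training loop are illustrative, not taken from the paper): model weights take gradient steps on training batches, while the regularization parameter, here a learnable input-noise scale, takes gradient steps on validation batches.

```python
# Hypothetical sketch of cross-regularization via learnable noise injection.
# Training data updates the feature-learning weights; validation data
# updates the regularization parameter by gradient descent.
import torch
import torch.nn as nn

class NoisyMLP(nn.Module):
    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_out),
        )
        # Learnable noise scale: the regularization parameter.
        self.log_noise = nn.Parameter(torch.tensor(-2.0))

    def forward(self, x):
        if self.training:
            # Reparameterized Gaussian noise, differentiable w.r.t. log_noise.
            x = x + torch.exp(self.log_noise) * torch.randn_like(x)
        return self.body(x)

model = NoisyMLP(16, 64, 1)
loss_fn = nn.MSELoss()

# Disjoint parameter groups: network weights vs. regularization parameter.
weight_opt = torch.optim.Adam(model.body.parameters(), lr=1e-3)
reg_opt = torch.optim.Adam([model.log_noise], lr=1e-2)

def step(train_batch, val_batch):
    xb, yb = train_batch
    xv, yv = val_batch
    model.train()

    # 1) Training batch: gradient step on the weights only.
    weight_opt.zero_grad()
    loss_fn(model(xb), yb).backward()
    weight_opt.step()

    # 2) Validation batch: gradient step on the noise scale only.
    #    Noise stays active so log_noise receives a nonzero gradient.
    reg_opt.zero_grad()
    loss_fn(model(xv), yv).backward()
    reg_opt.step()
```

Because the two optimizers own disjoint parameter sets, a single run adapts the noise scale alongside the weights, consistent with the single-run efficiency described in the abstract.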
Lay Summary: Computer models that learn from data must balance grasping general patterns, so they can make good predictions on new inputs, against the pitfall of overfitting. Overfitting, which means learning the training data too perfectly, including its noise and incidental details, leads to poor performance on unseen information. To prevent this, scientists use "regularization" techniques, such as constraining model complexity or injecting "noise" during training. Yet achieving optimal generalization with these methods often requires extensive, inefficient manual tuning by experts. Our research introduces "cross-regularization," a method that lets models find this crucial balance automatically. The model learns from one dataset and uses a separate validation set to continuously fine-tune its complexity, guiding itself toward optimal generalization without manual intervention. This automated approach simplifies training and reveals how models can adapt their complexity in unexpected ways, sometimes thriving with surprisingly high internal "noise." Ultimately, cross-regularization efficiently helps create robust, reliable AI systems that learn general principles rather than overfitting, adapt to growing data, and provide more trustworthy predictions.
Primary Area: Deep Learning->Algorithms
Keywords: Regularization, Hyperparameter Optimization, Meta-Learning, Cross-Validation, Noise Injection, Deep Learning, Gradient-Based Optimization
Submission Number: 5339