Towards Robust Out-of-Distribution Generalization for Deep Neural Networks with Tailored Data Regularization

ICLR 2026 Conference Submission 21851 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: deep neural networks, noise injection, out-of-distribution, regularization, data augmentation
Abstract: Out-of-Distribution (OOD) generalization remains a fundamental yet often-overlooked challenge in modern machine learning, especially for Deep Neural Networks (DNNs), which are highly expressive but prone to overfitting under distributional shift. Classical learning theory highlights the role of regularization in managing the bias-variance trade-off, which is particularly important for complex models with high VC dimension. In this work, we explore stochastic data regularization techniques, such as random transformations and noise injection, applied not only as isolated strategies but also organized through a Scheduling Policy framework based on Curriculum Learning. By progressively increasing input difficulty during training, the schedule aligns model capacity with task complexity, promoting more robust generalization. We also propose a novel statistical procedure to assess the consistency of performance estimates across cross-validation folds, mitigating miscoverage in confidence interval estimation. Altogether, our findings highlight the importance of tailored data regularization, where the selection, combination, and scheduling of perturbations are key to achieving OOD robustness in DNNs.
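
The abstract describes a Curriculum Learning-based Scheduling Policy that progressively increases input difficulty. As a minimal illustrative sketch (not the authors' code), the example below assumes a linear ramp of Gaussian noise strength over training and a random horizontal flip as the stochastic transformation; the function names and the linear schedule are assumptions for illustration only.

```python
# Minimal sketch of curriculum-scheduled stochastic data regularization:
# perturbation strength grows with training progress, so input difficulty
# tracks model capacity. Schedule shape and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def noise_scale(epoch: int, total_epochs: int, max_std: float = 0.3) -> float:
    """Linear curriculum: noise std ramps from 0 up to max_std over training."""
    return max_std * min(1.0, epoch / max(1, total_epochs - 1))

def augment_batch(x: np.ndarray, epoch: int, total_epochs: int) -> np.ndarray:
    """Apply a random transformation (horizontal flip) plus scheduled
    Gaussian noise injection to a batch of images shaped (N, H, W, C)."""
    x = x.copy()
    flip_mask = rng.random(x.shape[0]) < 0.5
    x[flip_mask] = x[flip_mask, :, ::-1, :]           # random horizontal flip
    std = noise_scale(epoch, total_epochs)
    return x + rng.normal(0.0, std, size=x.shape)     # scheduled noise injection

# Early epochs see nearly clean inputs; later epochs see harder ones.
batch = rng.random((8, 32, 32, 3)).astype(np.float32)
for epoch in (0, 25, 49):
    _ = augment_batch(batch, epoch, total_epochs=50)
    print(epoch, round(noise_scale(epoch, 50), 3))
```

In practice such a schedule could also be nonlinear or adaptive; the linear ramp is just the simplest instantiation of "progressively increasing input difficulty."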
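The fold-consistency procedure itself is not detailed in the abstract. As a hedged stand-in (a generic baseline, not the paper's proposed method), the sketch below computes a Student-t confidence interval over k-fold scores and flags folds falling outside it. Naive t-intervals on cross-validation scores are known to undercover because fold estimates are correlated, which is plausibly the miscoverage effect the proposed procedure targets; `fold_consistency` and its parameters are illustrative assumptions.

```python
# Generic consistency check over cross-validation folds (stand-in sketch,
# not the paper's novel procedure): a Student-t CI on fold scores plus a
# crude flag for folds whose scores fall outside the interval.
import numpy as np
from scipy import stats

def fold_consistency(scores, alpha=0.05):
    scores = np.asarray(scores, dtype=float)
    k = scores.size
    mean = scores.mean()
    sem = scores.std(ddof=1) / np.sqrt(k)              # standard error of the mean
    half = stats.t.ppf(1 - alpha / 2, df=k - 1) * sem  # t critical value * SEM
    lo, hi = mean - half, mean + half
    inconsistent = np.where((scores < lo) | (scores > hi))[0]
    return (lo, hi), inconsistent

# One anomalous fold (0.62) is flagged as inconsistent with the others.
(lo, hi), bad = fold_consistency([0.81, 0.79, 0.83, 0.80, 0.62])
print(f"95% CI: [{lo:.3f}, {hi:.3f}]  inconsistent folds: {bad.tolist()}")
```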
Primary Area: learning theory
Submission Number: 21851