Keywords: loss smoothing, sharpness-aware minimization, flat minima, deep learning, activation decay
TL;DR: "activation decay," a computationally efficient method that improves generalization by flattening sharp minima through activation regularization
Abstract: Generalization in deep learning is often associated with the sharpness of the minima encountered during training. We introduce a novel, deterministic, and computationally efficient method called \emph{activation decay}, designed to flatten sharp minima and improve generalization across a wide range of tasks. Derived from Gaussian smoothing, activation decay operates by regularizing the activations of critical network layers, effectively reducing sharpness and improving robustness. Unlike stochastic techniques such as dropout or the more computationally expensive Sharpness-Aware Minimization (SAM), our approach requires no additional computational overhead, making it particularly suited for large-scale models.
We further demonstrate that activation decay can be seamlessly combined with other regularization techniques, offering enhanced regularization without increasing training complexity. Extensive experiments on CIFAR-10, ImageNet, and natural language processing (NLP) tasks validate our approach, showing consistent improvements in generalization and robustness to label noise.
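To make the mechanism concrete, the following is a minimal, hypothetical sketch of activation regularization in PyTorch: an L2 penalty on the activations of selected layers is added to the task loss with a coefficient `decay`. The choice of layers, the penalty form, and the coefficient are illustrative assumptions for exposition; the paper's exact formulation of activation decay (derived from Gaussian smoothing) may differ.

```python
# Hypothetical sketch: activation decay as an L2 penalty on the activations of
# selected layers, added to the task loss. The coefficient `decay` and the
# choice of layers are illustrative assumptions, not the paper's exact recipe.
import torch
import torch.nn as nn

class ActivationDecay:
    """Accumulates an L2 penalty on the activations of chosen modules via forward hooks."""
    def __init__(self, modules, decay=1e-4):
        self.decay = decay
        self.penalty = 0.0
        for m in modules:
            m.register_forward_hook(self._hook)

    def _hook(self, module, inputs, output):
        # Accumulate the mean squared activation of this layer for the current forward pass.
        self.penalty = self.penalty + output.pow(2).mean()

    def pop(self):
        # Return the accumulated penalty and reset it for the next iteration.
        p, self.penalty = self.penalty, 0.0
        return p

# Usage: regularize the activations of the hidden (post-ReLU) layer of a small MLP.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
reg = ActivationDecay([model[1]], decay=1e-4)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
logits = model(x)
loss = nn.functional.cross_entropy(logits, y) + reg.decay * reg.pop()
loss.backward()
```

Because the penalty is just an extra additive term in the loss, it adds no extra forward or backward passes, which is the sense in which such a regularizer incurs no additional computational overhead compared with SAM-style methods.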
Supplementary Material: zip
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9917