Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Deep Learning, Model Robustness, Domain Generalization, Common Corruption Robustness, Adversarial Robustness
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: Reinterpreting label smoothing as a means to incorporate perturbation uncertainty.
Abstract: Model robustness is the ability of a machine learning model to perform well when confronted with unexpected distributional shifts during inference. While various augmentation-based methods exist to improve common corruption robustness, they often rely on predefined image operations, leaving the potential of perturbation-based strategies largely untapped. In response to these limitations, we repurpose label smoothing as a tool for embedding the uncertainty of perturbations. By tying the confidence level to the intensity of isotropic perturbations through a monotonically decreasing function, we demonstrate that the trained model acquires increased boundary thickness and flatter minima. Both properties correlate strongly with general model robustness, extending beyond resistance to common corruptions. Our evaluations on the CIFAR-10/100, Tiny-ImageNet, and ImageNet benchmarks confirm that our approach not only bolsters robustness on its own but also complements existing augmentation strategies effectively. Notably, our method enhances both common corruption and adversarial robustness in all experimental cases, a feature not observed with prior augmentations.
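The following is a minimal PyTorch sketch of the core idea described in the abstract. The Gaussian noise model, the linear confidence schedule, and all names and hyperparameters (noisy_smooth_targets, max_sigma, min_conf) are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def noisy_smooth_targets(x, y, num_classes, max_sigma=0.5, min_conf=0.6):
    """Perturb inputs with isotropic Gaussian noise and build soft labels
    whose confidence decreases monotonically with the noise intensity.
    The linear schedule and all hyperparameters here are illustrative."""
    batch = x.size(0)
    # Sample a per-example perturbation intensity sigma in [0, max_sigma].
    sigma = torch.rand(batch, device=x.device) * max_sigma
    x_noisy = x + torch.randn_like(x) * sigma.view(-1, 1, 1, 1)

    # Confidence decays linearly from 1.0 (clean) to min_conf (max noise);
    # any monotonically decreasing function of sigma would fit the idea.
    conf = 1.0 - (1.0 - min_conf) * (sigma / max_sigma)

    # Soft target: put `conf` mass on the true class, spread the rest uniformly.
    off_value = (1.0 - conf) / (num_classes - 1)
    target = off_value.view(-1, 1).expand(-1, num_classes).clone()
    target.scatter_(1, y.view(-1, 1), conf.view(-1, 1))
    return x_noisy, target

def soft_cross_entropy(logits, target):
    # Cross-entropy against soft (smoothed) targets.
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# Usage inside a training step (model and data batch assumed):
# x_noisy, target = noisy_smooth_targets(x, y, num_classes=10)
# loss = soft_cross_entropy(model(x_noisy), target)
```

Unlike standard label smoothing with a fixed smoothing constant, the target confidence here varies per example with the sampled perturbation strength, so heavily perturbed inputs carry correspondingly less label certainty.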
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: pdf
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4690