Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Flood, Overfitting, Regularization
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We present a novel flood regularizer that adapts the flood level of each training sample according to the difficulty of the sample.
Abstract: Although neural networks are conventionally optimized toward zero training loss, recent work has shown that targeting a non-zero training loss threshold, referred to as a flood level, often enables better test-time generalization.
Current approaches, however, apply the same constant flood level to all training samples, which implicitly assumes that all samples are equally difficult.
We present AdaFlood, a novel flood regularization method that adapts the flood level of each training sample according to the difficulty of the sample.
Intuitively, since training samples are not equal in difficulty, the target training loss should be conditioned on the instance.
Experiments on datasets covering four diverse input modalities — text, images, asynchronous event sequences, and tabular — demonstrate the versatility of AdaFlood across data domains and noise levels.
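The abstract's idea of a per-sample flood level can be illustrated with a minimal sketch. This assumes the standard flooding transform |ℓ − θ| + θ applied element-wise, with hypothetical per-sample levels θᵢ (e.g., set from an estimate of each sample's difficulty); it is not the paper's exact implementation.

```python
def adaflood_loss(per_sample_losses, flood_levels):
    """Hedged sketch of per-sample flooding.

    Applies the flooding transform |l_i - theta_i| + theta_i to each
    sample's loss and averages. flood_levels (theta_i) are hypothetical
    per-sample targets; with a constant theta this reduces to the
    original constant-flood-level objective.
    """
    assert len(per_sample_losses) == len(flood_levels)
    n = len(per_sample_losses)
    return sum(abs(l - t) + t
               for l, t in zip(per_sample_losses, flood_levels)) / n
```

Because the gradient of |ℓ − θ| + θ with respect to ℓ is +1 above θ and −1 below it, a sample whose loss has dropped under its own flood level pushes training back upward, preventing that sample from being fit all the way to zero loss.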
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: pdf
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7123