iFlood: A Stable and Effective Regularizer

29 Sept 2021 (modified: 17 Feb 2022) · ICLR 2022 Poster
Keywords: overfitting, regularizer
Abstract: Various regularization methods have been designed to prevent overfitting of machine learning models. Among them, a surprisingly simple yet effective one, called Flooding, was recently proposed; it directly constrains the average training loss to stay at a given level. However, our further studies uncover that the design of Flooding's loss function can lead to a discrepancy between its objective and its implementation, and can cause instability. To resolve these issues, in this paper we propose a new regularizer, called individual Flood (denoted as iFlood). With instance-level constraints on the training loss, iFlood encourages the trained model to better fit under-fitted instances while suppressing its confidence on over-fitted ones. We theoretically show that the design of iFlood is intrinsically connected with removing noise or bias in the training data, which makes it suitable for a variety of applications that aim to improve the generalization performance of learned models. We also theoretically link iFlood to other regularizers by comparing the inductive biases they introduce. Our experimental results on both image classification and language understanding tasks confirm that models learned with iFlood stably converge to solutions with better generalization ability and behave consistently at the instance level.
One-sentence Summary: We propose a novel regularizer named iFlood, which encourages the trained model to better fit under-fitted instances while suppressing its confidence on over-fitted ones.
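To make the difference concrete, below is a minimal sketch of the two loss designs described in the abstract. The Flooding form `|mean(loss) - b| + b` follows the original Flooding formulation; the per-instance form for iFlood is a plain-Python illustration of the instance-level constraint the abstract describes, and the flood level `b` is a hypothetical hyperparameter value, not one taken from the paper.

```python
def flooding_loss(losses, b=0.1):
    """Flooding: constrain the *batch-average* training loss to stay near b.

    losses: list of per-instance loss values; b: flood level (illustrative).
    """
    avg = sum(losses) / len(losses)
    return abs(avg - b) + b


def iflood_loss(losses, b=0.1):
    """iFlood (sketch): apply the flooding constraint to each *individual*
    instance loss, then average.

    Under-fitted instances (loss > b) keep their gradient signal, while
    over-fitted instances (loss < b) are pushed back toward level b.
    """
    return sum(abs(l - b) + b for l in losses) / len(losses)
```

Note how the two differ when the batch mixes over- and under-fitted instances: with `losses = [0.05, 0.3]` and `b = 0.1`, Flooding sees an average of 0.175 (above the flood level) and behaves like the unmodified loss, while iFlood penalizes the over-confident first instance individually.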