Rethinking Regularization with Random Label Smoothing

Published: 01 Jan 2024, Last Modified: 12 Nov 2024 · Neural Processing Letters, 2024 · CC BY-SA 4.0
Abstract: Regularization improves machine learning models by penalizing them during training. Such approaches act on the input, internal, or output layers. Regarding the latter, label smoothing is widely used to inject noise into the label vector, making learning more challenging. This work proposes a new label regularization method, Random Label Smoothing, which assigns random values to the labels while preserving their semantics during training; the idea is to replace the entire label vector with fixed arbitrary values. Results show improvements on image classification and super-resolution tasks, outperforming state-of-the-art techniques for these purposes.
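To make the idea concrete, the sketch below contrasts classic label smoothing with a hypothetical variant in the spirit of Random Label Smoothing: the true class receives a randomly drawn target value while the remaining mass is spread over the other classes, so the argmax (the label's semantics) is preserved. The sampling scheme and ranges here are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def label_smoothing(one_hot, eps=0.1):
    # Classic label smoothing: mix the one-hot target
    # with a uniform distribution over the K classes.
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k

def random_label_smoothing(one_hot, rng, low=0.6, high=1.0):
    # Hypothetical sketch: draw a random peak value for the true class
    # and spread the remainder uniformly over the other classes.
    # Because peak > remainder per class (for low > 1/K), the argmax
    # is unchanged, preserving the label's semantics.
    k = one_hot.shape[-1]
    peak = rng.uniform(low, high)
    rest = (1.0 - peak) / (k - 1)
    return np.where(one_hot == 1.0, peak, rest)

rng = np.random.default_rng(0)
y = np.array([0.0, 1.0, 0.0, 0.0])
smoothed = random_label_smoothing(y, rng)
assert smoothed.argmax() == 1            # true class still dominates
assert np.isclose(smoothed.sum(), 1.0)   # still a valid distribution
```

Drawing a fresh target per example (or per epoch) is one plausible way to realize "random values" while keeping each vector a valid probability distribution.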