Uncertainty-Aware Label Regularisation Driven by Class Embedding with Attention Mechanism

Published: 2025, Last Modified: 15 Jan 2026 · ICIC (18) 2025 · CC BY-SA 4.0
Abstract: Because neural networks may overfit noisy labels, label smoothing has been designed as a regularisation technique that softens a hard label by transforming the one-hot vector of each class into a dense one, i.e., reducing the probability of the correct class to a value less than 1 and equalising the probability values of the incorrect classes. However, since the correct class may relate to each of the other classes to a varying degree, it may not always be appropriate to equalise the probabilities of the incorrect classes. To exploit these inter-class relationships when softening labels, we propose in this paper an uncertainty-aware label regularisation technique driven by class embedding with an attention mechanism. Specifically, we design a class embedding approach that uses an attention mechanism to weight training instances in the setting of adaptive boosting with data augmentation, in order to measure inter-class similarity scores and to exploit those scores for learning a soft label for each instance. Experimental results on 9 public data sets indicate that the proposed label regularisation technique outperforms 6 baseline methods on 6 data sets and performs marginally worse than two of the baseline methods on one data set. The results also show that the classification accuracy produced by the proposed approach is consistently above 82% on all 9 data sets, whereas each of the baseline methods scores below 80% on at least one data set.
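The contrast the abstract draws can be illustrated with a minimal sketch: standard label smoothing spreads the smoothing mass equally over incorrect classes, whereas a similarity-driven variant spreads it in proportion to inter-class similarity scores. The `similarity` matrix and function names below are illustrative assumptions, not the paper's actual attention-based class-embedding method, which learns those scores via instance weighting under adaptive boosting.

```python
import numpy as np

def smooth_label(num_classes, correct_class, epsilon=0.1):
    """Standard label smoothing: the correct class keeps 1 - epsilon,
    and the remaining epsilon mass is split equally among the others."""
    label = np.full(num_classes, epsilon / (num_classes - 1))
    label[correct_class] = 1.0 - epsilon
    return label

def similarity_smooth_label(correct_class, similarity, epsilon=0.1):
    """Similarity-weighted smoothing (illustrative): the epsilon mass is
    distributed over incorrect classes in proportion to their similarity
    to the correct class, rather than equally."""
    sims = similarity[correct_class].astype(float).copy()
    sims[correct_class] = 0.0  # no smoothing mass back to the correct class
    weights = sims / sims.sum()
    label = epsilon * weights
    label[correct_class] = 1.0 - epsilon
    return label
```

With a hypothetical 3-class similarity matrix in which class 1 is more similar to class 0 than class 2 is, the similarity-weighted soft label assigns class 1 a larger share of the smoothing mass, while both variants keep the correct-class probability at 1 - epsilon and sum to 1.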