Learning to Reweight Samples with Offline Loss Sequence

Published: 01 Jan 2021 · Last Modified: 04 Apr 2025 · ICDM 2021 · License: CC BY-SA 4.0
Abstract: Deep neural networks (DNNs) provide best-in-class solutions to many supervised tasks due to their powerful function-fitting capabilities. However, it is challenging to handle data bias, such as label noise and class imbalance, when applying DNNs to real-world problems. Sample reweighting is a popular strategy for tackling data bias: it assigns higher weights to informative samples or samples with clean labels. However, conventional reweighting methods require prior knowledge of the bias distribution, which is rarely available in practice. In recent years, meta-learning-based methods have been proposed to learn to assign weights to training samples adaptively, using their online training losses or gradient directions. However, the latent bias distribution cannot be adequately characterized in an online fashion: the loss distribution shifts over the course of training, making sample-weight learning even harder. In contrast to prior methods, we propose a two-stage training strategy to address these problems. In the first stage, the per-sample loss sequences are collected. In the second stage, a subnet with convolutional layers learns the mapping from each sample's offline loss sequence to its weight. Guided by a small unbiased meta dataset, this subnet is optimized iteratively with the main classifier network in a meta-learning manner. Empirical results show that our method, called Meta Reweighting with Offline Loss Sequence (MROLS), outperforms state-of-the-art reweighting techniques on most benchmarks. Moreover, the sample weights learned via MROLS can be reused by other classifiers, directly enhancing the standard training scheme. Our source code is available at https://github.com/Neronjust2017/MROLS.
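To make the second stage concrete, below is a minimal sketch in PyTorch of the kind of convolutional subnet the abstract describes: one that maps an offline loss sequence to a scalar sample weight, which then reweights the per-sample classification loss. All names here (`LossSeqWeightNet`, `weighted_loss`, layer sizes) are hypothetical illustrations, not the authors' implementation; it also omits the meta-learning inner/outer loop that updates the subnet on the unbiased meta dataset. See the linked repository for the actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LossSeqWeightNet(nn.Module):
    """Hypothetical subnet: maps a per-sample loss sequence
    (collected offline in stage 1) to a weight in (0, 1)."""

    def __init__(self, hidden: int = 16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the sequence dimension
        )
        self.fc = nn.Linear(hidden, 1)

    def forward(self, loss_seq: torch.Tensor) -> torch.Tensor:
        # loss_seq: (batch, seq_len) -> sample weights of shape (batch,)
        h = self.conv(loss_seq.unsqueeze(1)).squeeze(-1)  # (batch, hidden)
        return torch.sigmoid(self.fc(h)).squeeze(-1)


def weighted_loss(classifier: nn.Module, weight_net: LossSeqWeightNet,
                  x: torch.Tensor, y: torch.Tensor,
                  loss_seq: torch.Tensor) -> torch.Tensor:
    """Stage-2 objective: per-sample cross-entropy reweighted by the subnet."""
    per_sample = F.cross_entropy(classifier(x), y, reduction="none")
    w = weight_net(loss_seq)
    # Normalize by the weight sum so the loss scale stays stable.
    return (w * per_sample).sum() / w.sum().clamp_min(1e-8)
```

In the full method, `weight_net` would be updated by meta-gradients of the classifier's loss on the small unbiased meta dataset, alternating with the classifier update; this sketch shows only the forward reweighting path.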