Mitigating Dataset Bias by Using Per-sample Gradient

Published: 21 Oct 2022, Last Modified: 20 Oct 2024, NeurIPS 2022 Workshop DistShift Poster
Keywords: Dataset bias, Per-sample gradient, Gradient norm
TL;DR: We propose an algorithm that mitigates dataset bias by adjusting each sample's sampling probability in proportion to its per-sample gradient norm computed from the trained model.
Abstract: The performance of deep neural networks is strongly influenced by how the training dataset is constructed. In particular, when attributes that are strongly correlated with the target attribute are present, the trained model can make unintended prejudgments and show significant inference errors (i.e., the dataset bias problem). Various methods have been proposed to mitigate dataset bias, and their emphasis is on weakly correlated samples, called bias-conflicting samples. Most of these methods rely on explicit bias labels provided by humans, which are costly to obtain. Recently, several studies have tried to reduce human intervention by utilizing output-space values of neural networks, such as features, logits, loss, or accuracy. However, these output-space values may be insufficient for the model to understand the bias attributes well. In this study, we propose a gradient-based debiasing algorithm called PGD (Per-sample Gradient-based Debiasing). PGD comprises three steps: (1) training a model with uniform batch sampling, (2) setting the importance of each sample in proportion to the norm of its per-sample gradient, and (3) retraining the model with importance-based batch sampling, whose probabilities are obtained in step (2). Compared with existing baselines on various datasets, the proposed method achieves state-of-the-art accuracy on the classification task.
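As a rough illustration of the three-step procedure described above, here is a minimal PyTorch sketch. This is not the authors' implementation: the synthetic data, toy model, and hyperparameters are hypothetical stand-ins, and the per-sample gradient norms are computed with a batch-size-1 loop purely for clarity (a vectorized approach, e.g. via `torch.func`, would be faster in practice).

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Hypothetical toy setup: synthetic binary classification data and a small MLP.
torch.manual_seed(0)
X = torch.randn(512, 20)
y = torch.randint(0, 2, (512,))
dataset = TensorDataset(X, y)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
criterion = nn.CrossEntropyLoss()

def train(model, loader, epochs=3, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            criterion(model(xb), yb).backward()
            opt.step()

# Step (1): train with uniform batch sampling.
train(model, DataLoader(dataset, batch_size=64, shuffle=True))

# Step (2): per-sample gradient norms from the trained model
# (looped one sample at a time for clarity, not efficiency).
norms = []
for xb, yb in DataLoader(dataset, batch_size=1):
    model.zero_grad()
    criterion(model(xb), yb).backward()
    g = torch.cat([p.grad.flatten() for p in model.parameters()
                   if p.grad is not None])
    norms.append(g.norm().item())
weights = torch.tensor(norms)
probs = weights / weights.sum()  # sampling probability ∝ gradient norm

# Step (3): retrain with importance-based batch sampling, where samples with
# larger gradient norms (often bias-conflicting) are drawn more frequently.
sampler = WeightedRandomSampler(probs, num_samples=len(dataset), replacement=True)
train(model, DataLoader(dataset, batch_size=64, sampler=sampler))
```

The sketch reuses `WeightedRandomSampler` so the importance weights translate directly into batch composition; whether to retrain from scratch or continue from the step-(1) model is a design choice this toy version leaves open.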
Community Implementations: [7 code implementations](https://www.catalyzex.com/paper/mitigating-dataset-bias-by-using-per-sample/code)