Deconfounded Noisy Labels Learning

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission
Keywords: Noisy labels learning, image classification, causal inference.
Abstract: Noisy labels are prevalent in real-world applications and cause severe performance degradation. In this paper, we first challenge the validity of the small-loss trick that many noisy-label methods rely on. We then study an empirical phenomenon, termed malignant bias, that results from the spurious correlation between noisy labels and background representations. To address this problem, unlike previous works based on statistical and regularization methods, we revisit the task from a causal perspective. We propose a causal intervention model named deconfounded noisy labels learning (DeNLL), which explicitly deconfounds noisy-label learning through causal adjustment, eliminating the spurious correlation between labels and background representations while preserving the true causal effect between labels and foreground representations. DeNLL implements the derived adjustment with a localization module (LM) and a debiased interaction module (DIM): LM adaptively discriminates foreground from background, and DIM dynamically encourages interaction between the original representation and a debiased factor of that representation, in accordance with the causal intervention. Experiments are carried out on five public noisy datasets covering synthetic, human, and real-world label noise. The proposed method achieves state-of-the-art accuracy with clear improvements. Moreover, the method is model-agnostic and improves performance consistently across different backbones.
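The submission's implementation is not included in this listing, so the following is only a minimal, hypothetical PyTorch sketch of how an LM/DIM pair of the kind described in the abstract might be wired together: a localization module predicts a soft foreground mask, and a debiased interaction module fuses the original feature with its foreground-weighted (debiased) version before classification. All module names, shapes, and the simple mask-and-fuse interaction are illustrative assumptions, not the authors' actual architecture or adjustment formula.

```python
# Hypothetical sketch (not the authors' code): a localization module (LM) producing
# a soft foreground mask, and a debiased interaction module (DIM) that mixes the
# original feature with a foreground-masked "debiased" feature.
import torch
import torch.nn as nn


class LocalizationModule(nn.Module):
    """Predicts a per-location foreground probability from a feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) -> mask: (B, 1, H, W), values in [0, 1]
        return torch.sigmoid(self.score(feat))


class DebiasedInteractionModule(nn.Module):
    """Fuses the original feature with its foreground-masked (debiased) version."""
    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        foreground = feat * mask  # suppress background evidence
        return self.fuse(torch.cat([feat, foreground], dim=1))


# Illustrative usage on a backbone feature map (shapes are arbitrary):
feat = torch.randn(8, 256, 14, 14)
lm = LocalizationModule(256)
dim_module = DebiasedInteractionModule(256)
debiased_feat = dim_module(feat, lm(feat))  # would then feed the classifier head
```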
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Applications (eg, speech processing, computer vision, NLP)
TL;DR: Explicitly deconfound noisy-label learning via causal adjustment, eliminating the spurious correlation between labels and background representations while preserving the true causal effect between labels and foreground representations.
Supplementary Material: zip