Class-Context-Aware Phantom Uncertainty Modeling

15 Sept 2023 (modified: 11 Feb 2024) Submitted to ICLR 2024
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: uncertainty modeling, probabilistic representations, variational inference, robust learning
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We model the uncertainty of artificial "phantoms" to mitigate uncertainty underestimation and thereby improve model robustness.
Abstract: Uncertainty modeling is crucial for developing robust and reliable models, since it enables decision-makers to assess the trustworthiness of predictions and make informed choices based on the uncertainty associated with each prediction. A straightforward way to endow models with the ability to estimate uncertainty is to model a probabilistic distribution over the input representations and approximate it by variational inference. However, this method inevitably underestimates uncertainty, resulting in overconfident predictions even on data that contains inherent noise or ambiguity. To address this challenge, we introduce a novel approach called Class-Context-Aware Phantom Uncertainty Modeling. To circumvent the underestimation of the uncertainty associated with the input data, we shift the focus to inferring the distribution of their respective phantoms, which are derived by leveraging class-contextual information. We mitigate uncertainty underestimation by demonstrating that the estimated uncertainty of the original input data is no less than that of the phantom. We showcase our method's superior robustness and generalization capabilities through experiments on robust learning tasks such as noisy-label learning and cross-domain generalization.
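The baseline the abstract describes, modeling a probabilistic distribution over input representations and approximating it by variational inference, can be sketched as below. This is a minimal NumPy illustration of Gaussian representation heads with the reparameterization trick and a KL regularizer; all function names, weights, and shapes are assumptions for illustration, not the submission's actual code, and the phantom construction itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def variational_embedding(x, W_mu, W_logvar):
    """Map inputs to a Gaussian latent via separate mean and
    log-variance heads (linear heads here for simplicity)."""
    mu = x @ W_mu
    logvar = x @ W_logvar
    return mu, logvar

def reparameterize(mu, logvar, rng):
    # z = mu + sigma * eps with eps ~ N(0, I): sampling stays
    # differentiable w.r.t. mu and logvar in a real framework.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # KL(N(mu, sigma^2) || N(0, I)), summed over latent dimensions;
    # this is the variational regularizer that controls the
    # estimated uncertainty of each representation.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

# Toy forward pass: 4 inputs of dimension 8, latent dimension 2.
x = rng.standard_normal((4, 8))
W_mu = rng.standard_normal((8, 2)) * 0.1
W_logvar = rng.standard_normal((8, 2)) * 0.1

mu, logvar = variational_embedding(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
kl = kl_to_standard_normal(mu, logvar)
```

In this setup the learned per-input variance is what the abstract argues gets systematically underestimated; the paper's contribution is to infer such distributions for class-context-derived phantoms instead, with the input's uncertainty lower-bounded by the phantom's.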
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 469