Robustness via Probabilistic Cross-Task Ensembles

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Withdrawn Submission
Keywords: Robustness, distribution shift, ensembling, uncertainty estimation
Abstract: We present a method for making predictions with neural networks that, at test time, is robust against shifts from the training data distribution. The proposed method is based on making \emph{one prediction via different cues} (called middle domains) and ensembling their outputs into one strong prediction. The premise of the idea is that predictions via different cues respond differently to distribution shifts; hence, if the ensembling is done successfully, they can be merged into one robust final prediction. We perform the ensembling in a straightforward but principled probabilistic manner. The evaluations are performed on multiple vision datasets under a range of natural and synthetic distribution shifts and demonstrate that the proposed method is considerably more robust than its standard learning counterpart, conventional ensembles, and several other baselines.
One-sentence Summary: A method for making robust predictions using an ensemble of cross-task consistent models.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Reviewed Version (pdf): https://openreview.net/references/pdf?id=sA3Pc-y9Wn
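The sketch below illustrates one common way such "principled probabilistic" ensembling can be done: each cue-specific (middle-domain) model is assumed to output a prediction together with an uncertainty estimate, and the predictions are fused by inverse-variance weighting. This is a generic illustration of the idea described in the abstract, not the authors' released code; the function name, the Gaussian/heteroscedastic assumption, and the exact merging rule are assumptions made here for concreteness.

```python
# Minimal sketch (not the authors' implementation): probabilistically merging
# predictions that reach the same target via different cues ("middle domains").
# Assumes each cue-specific network outputs a per-pixel mean and variance,
# e.g. trained with a heteroscedastic Gaussian negative log-likelihood.
import torch


def merge_gaussian_predictions(means, variances, eps=1e-6):
    """Fuse K predictions of the same target by inverse-variance weighting.

    means, variances: tensors of shape (K, ...) -- one slice per cue.
    Returns the fused mean and variance (product of Gaussians, up to a constant).
    """
    precisions = 1.0 / (variances + eps)          # confidence of each cue
    fused_var = 1.0 / precisions.sum(dim=0)       # combined uncertainty
    fused_mean = fused_var * (precisions * means).sum(dim=0)
    return fused_mean, fused_var


# Toy usage: three cues predicting the same 1-channel target for a 2x2 image.
if __name__ == "__main__":
    K, H, W = 3, 2, 2
    means = torch.randn(K, 1, H, W)
    variances = torch.rand(K, 1, H, W) + 0.1      # strictly positive variances
    mean, var = merge_gaussian_predictions(means, variances)
    print(mean.shape, var.shape)                  # torch.Size([1, 2, 2]) twice
```

Under this weighting, cues whose predicted uncertainty grows under a distribution shift contribute less to the fused output, which is the intuition behind merging differently-affected cues into one robust prediction.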