URL: A Representation Learning Benchmark for Transferable Uncertainty Estimates

Published: 26 Sept 2023, Last Modified: 03 Jan 2024, NeurIPS 2023 Datasets and Benchmarks Poster
Keywords: Representation Learning, Uncertainty, Zero-shot, Transfer, Generalization, Downstream, Benchmark
TL;DR: URL evaluates uncertainty estimates of large pretrained models on unseen downstream data, thereby extending representation learning benchmarks.
Abstract: Representation learning has significantly driven the field towards pretrained models that serve as a valuable starting point when transferring to new datasets. With the rising demand for reliable machine learning and uncertainty quantification, there is a need for pretrained models that not only provide embeddings but also transferable uncertainty estimates. To guide the development of such models, we propose the Uncertainty-aware Representation Learning (URL) benchmark. Besides the transferability of the representations, it also measures the zero-shot transferability of the uncertainty estimate using a novel metric. We apply URL to evaluate ten uncertainty quantifiers that are pretrained on ImageNet and transferred to eight downstream datasets. We find that approaches that focus on the uncertainty of the representation itself, or that estimate the prediction risk directly, outperform those based on the probabilities of upstream classes. Yet, achieving transferable uncertainty quantification remains an open challenge. Our findings indicate that it is not necessarily in conflict with traditional representation learning goals. Code is available at [https://github.com/mkirchhof/url](https://github.com/mkirchhof/url).
Submission Number: 184
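To make the abstract's notion of "zero-shot transferability of the uncertainty estimate" concrete, the sketch below scores how well a model's uncertainty estimates predict retrieval errors of its frozen embeddings on a downstream dataset, with no fine-tuning. This is a minimal illustrative sketch, assuming transferability is summarized as the AUROC of the uncertainties against recall@1 failures; the function name `zero_shot_uncertainty_auroc` and this exact formulation are assumptions for illustration, not necessarily the metric as defined in the paper or the linked repository.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def zero_shot_uncertainty_auroc(embeddings, uncertainties, labels):
    """Illustrative metric: does higher uncertainty predict downstream errors?

    embeddings:    (n, d) array of frozen pretrained features for downstream samples
    uncertainties: (n,) array, one uncertainty score per sample (higher = less certain)
    labels:        (n,) array of downstream class labels

    For each sample we check whether its nearest neighbour in embedding space
    (excluding itself) shares the same class (recall@1). We then compute the
    AUROC of the uncertainty scores for separating incorrect from correct
    retrievals; higher is better.
    """
    # Cosine similarity between all pairs of L2-normalised embeddings.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T
    np.fill_diagonal(sim, -np.inf)          # exclude the query itself
    nn_idx = sim.argmax(axis=1)             # index of the nearest neighbour
    correct = labels[nn_idx] == labels      # recall@1 success per sample

    # AUROC of uncertainty as a score for the "retrieval failed" event.
    return roc_auc_score(~correct, uncertainties)


# Example with dummy data: 100 samples, 16-dim embeddings, 5 classes.
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))
unc = rng.random(100)
lab = rng.integers(0, 5, size=100)
print(zero_shot_uncertainty_auroc(emb, unc, lab))
```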