Plex: Towards Reliability using Pretrained Large Model Extensions

26 May 2022 (modified: 22 Oct 2023) · ICML 2022 Pre-training Workshop · Contributed Talk
Keywords: reliability, large models, uncertainty, robustness, adaptation
TL;DR: We build tasks assessing the reliability of models in both vision and language, and develop Plex, a pretrained large model extension that improves the state of the art across these tasks.
Abstract: A recent trend in artificial intelligence is the use of pretrained models for language and vision tasks, which have achieved extraordinary performance but also puzzling failures. Probing these models' abilities in diverse ways is therefore critical to the field. In this paper, we explore the reliability of models, where we define a reliable model as one that not only achieves strong predictive performance but also performs well consistently over many decision-making tasks involving uncertainty (e.g., selective prediction, open set recognition), robust generalization (e.g., accuracy and proper scoring rules such as log-likelihood on in- and out-of-distribution datasets), and adaptation (e.g., active learning, few-shot uncertainty). We devise 10 types of tasks over 38 datasets in order to evaluate different aspects of reliability in both vision and language domains. To improve reliability, we develop ViT-Plex and T5-Plex, pretrained large model extensions (plex) for the vision and language modalities, respectively. Plex greatly improves the state of the art across reliability tasks and simplifies the traditional protocol, as it does not require designing scores or tuning the model for each individual task. We demonstrate scaling effects over model sizes up to 1B parameters and pretraining dataset sizes up to 4B examples. We also demonstrate Plex's capabilities on challenging tasks including zero-shot open set recognition, active learning, and uncertainty in conversational language understanding.
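As an illustration of one of the uncertainty tasks the abstract names, selective prediction lets a model abstain on inputs where its confidence is too low, trading coverage for accuracy. A minimal sketch, using maximum softmax probability as the confidence score; the function name and the threshold value are illustrative assumptions, not details from the paper:

```python
import numpy as np

def selective_predict(probs, threshold=0.7):
    """Selective prediction sketch: predict the argmax class only when
    the max softmax probability exceeds `threshold`; otherwise abstain.
    `probs` is an (n, k) array of per-class probabilities.
    Returns class indices, with -1 marking abstention."""
    probs = np.asarray(probs)
    confidence = probs.max(axis=1)      # max class probability per example
    preds = probs.argmax(axis=1)
    preds[confidence < threshold] = -1  # abstain on low-confidence inputs
    return preds

# Example: two confident predictions and one ambiguous one.
probs = [[0.90, 0.05, 0.05],   # confident -> class 0
         [0.10, 0.80, 0.10],   # confident -> class 1
         [0.40, 0.35, 0.25]]   # ambiguous -> abstain (-1)
print(selective_predict(probs, threshold=0.7))  # [ 0  1 -1]
```

Benchmarks of this kind then score the model jointly on the accuracy of the non-abstained predictions and on how much of the data it covers.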