Contrastive Adapters for Foundation Model Group Robustness

Published: 21 Jul 2022, Last Modified: 20 Oct 2024
SCIS 2022 Poster
Readers: Everyone
Keywords: foundation models, adapters, group robustness, large pretrained models
TL;DR: We find that zero-shot classification with foundation models may not be group-robust, and propose a simple adapter method to effectively and efficiently improve robustness.
Abstract: While large pretrained foundation models (FMs) have shown remarkable zero-shot classification robustness to dataset-level distribution shifts, their robustness to group shifts is relatively underexplored. We study this problem, and first find that popular FMs such as CLIP may not be robust to various group shifts. On prior robustness benchmarks, they achieve up to an 80.7 percentage point (pp) gap between average and worst-group accuracy. Unfortunately, current methods to improve robustness require retraining, which can be prohibitively expensive for large FMs. We find that existing ways to efficiently improve large-model inference, e.g., training adapters (lightweight MLPs) on top of FM embeddings, can also hurt group robustness compared to zero-shot classification. We thus propose the first adapter training method designed to improve FM robustness to group shifts. While prior work only trains adapters with class labels, we add a contrastive objective to explicitly learn similar embeddings for samples with initially dissimilar FM embeddings. Across the same benchmarks, contrastive adapting effectively and efficiently improves group robustness, raising worst-group accuracy by 16.0 to 56.0 pp over zero-shot without any FM finetuning. Beyond FM robustness, contrastive adapting achieves near-state-of-the-art robustness on Waterbirds and CelebA, while training only 1% of the model parameters trained by other methods.
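The recipe described in the abstract, a lightweight MLP adapter trained on frozen FM embeddings with a class-label loss plus a contrastive term, can be sketched compactly. The following is a minimal, hypothetical PyTorch illustration, not the authors' released code: the `Adapter` class, the SupCon-style `supcon_loss`, the `temperature` and `lambda_con` values, and the toy data are all illustrative assumptions, and the paper's exact contrastive formulation (e.g., how positives and negatives are sampled) may differ.

```python
# Minimal sketch (assumptions noted above): train an adapter on frozen FM embeddings
# with cross-entropy plus a supervised contrastive loss. The FM is never updated;
# only the adapter's parameters are trained.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Lightweight MLP trained on top of frozen foundation-model embeddings."""

    def __init__(self, dim: int, hidden: int = 128, num_classes: int = 2):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.head = nn.Linear(dim, num_classes)

    def forward(self, z):
        h = self.mlp(z)             # adapted embedding
        return h, self.head(h)      # (embedding, class logits)


def supcon_loss(h, labels, temperature: float = 0.1):
    """Supervised contrastive loss: pull same-class adapted embeddings together."""
    h = F.normalize(h, dim=1)
    sim = h @ h.t() / temperature                            # pairwise cosine similarities
    self_mask = torch.eye(len(h), dtype=torch.bool, device=h.device)
    # log-softmax over each row, excluding the self-pair from the denominator
    denom = torch.logsumexp(sim.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True)
    log_prob = sim - denom
    pos = (labels[:, None] == labels[None, :]) & ~self_mask  # same-class positives
    pos_counts = pos.sum(1).clamp(min=1)
    return -(log_prob * pos).sum(1).div(pos_counts).mean()


# Toy training step on precomputed (frozen) FM embeddings.
dim, num_classes, lambda_con = 512, 2, 1.0                   # lambda_con: assumed loss weight
adapter = Adapter(dim, num_classes=num_classes)
opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)

z = torch.randn(32, dim)                                     # stand-in for cached CLIP embeddings
y = torch.randint(0, num_classes, (32,))                     # class labels

h, logits = adapter(z)
loss = F.cross_entropy(logits, y) + lambda_con * supcon_loss(h, y)
opt.zero_grad()
loss.backward()
opt.step()
```

In practice the FM embeddings would be computed once and cached, so each training step touches only the small adapter, which is what keeps the approach cheap relative to finetuning the FM itself.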
Confirmation: Yes
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/contrastive-adapters-for-foundation-model/code)
