Keywords: concept bottleneck model; test-time adaptation; distribution shifts; interpretability
TL;DR: Concept bottleneck models for foundation models and their adaptability to distribution shifts
Abstract: Advancements in foundation models have led to a paradigm shift in deep learning pipelines.
The rich, expressive feature representations from these pre-trained, large-scale backbones are leveraged for multiple downstream tasks, usually via lightweight fine-tuning of a shallow fully-connected network on top of the representation.
However, the non-interpretable, black-box nature of this prediction pipeline can be a challenge, especially in critical domains such as healthcare.
In this paper, we explore the potential of Concept Bottleneck Models (CBMs) for transforming complex, non-interpretable foundation models into interpretable decision-making pipelines using high-level concept vectors.
Specifically, we focus on the test-time deployment of such an interpretable CBM pipeline ``in the wild'', where the distribution of inputs often shifts from the original training distribution.
We propose a \textit{lightweight adaptive CBM} that dynamically adjusts the concept-vector bank and the prediction layer(s) using only unlabeled data from the target domain, without access to the source dataset.
We evaluate this test-time CBM adaptation framework empirically on various distribution shifts and show that it produces concept-based interpretations that are better aligned with the test inputs, while also delivering a strong average test-accuracy improvement of 15.15\%, putting its performance on par with that of non-interpretable classification with foundation models.
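For concreteness, below is a minimal sketch of how such a pipeline could look: a concept-bottleneck head on top of frozen foundation-model features, with a source-free test-time adaptation step that updates only the concept bank and the prediction layer. The entropy-minimization objective and all names (e.g., `ConceptBottleneckHead`, `adapt_on_target_batch`) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumed for illustration, not the paper's implementation):
# a CBM head over frozen foundation-model features, adapted at test time
# using only unlabeled target-domain features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConceptBottleneckHead(nn.Module):
    def __init__(self, feature_dim: int, num_concepts: int, num_classes: int):
        super().__init__()
        # Bank of concept vectors; features are projected onto these to obtain
        # interpretable concept activations.
        self.concept_bank = nn.Parameter(torch.randn(num_concepts, feature_dim))
        # Linear predictor mapping concept activations to class logits.
        self.classifier = nn.Linear(num_concepts, num_classes)

    def forward(self, features: torch.Tensor):
        # Concept activations: cosine similarity between features and concept vectors.
        concepts = F.normalize(features, dim=-1) @ F.normalize(self.concept_bank, dim=-1).T
        logits = self.classifier(concepts)
        return logits, concepts


def adapt_on_target_batch(head: ConceptBottleneckHead,
                          features: torch.Tensor,
                          lr: float = 1e-3,
                          steps: int = 1) -> None:
    """Source-free test-time adaptation: update only the concept bank and the
    prediction layer from unlabeled target features. Entropy minimization is
    an assumed stand-in objective, not the paper's stated loss."""
    optimizer = torch.optim.Adam(head.parameters(), lr=lr)
    for _ in range(steps):
        logits, _ = head(features)
        probs = logits.softmax(dim=-1)
        # Encourage confident predictions on the (unlabeled) target batch.
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
        optimizer.zero_grad()
        entropy.backward()
        optimizer.step()
```

In this sketch only the bottleneck parameters are updated, keeping adaptation lightweight while the foundation-model backbone stays frozen, in line with the source-free setting described above.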
Submission Number: 123