Keywords: foundation models; concept bottleneck models; distribution shifts; concept-based explanations
TL;DR: The first study on the test-time adaptation of concept bottleneck models for foundation models
Abstract: Advancements in foundation models (FMs) have led to a paradigm shift in machine
learning. The rich, expressive feature representations from these pre-trained, large-
scale FMs are leveraged for multiple downstream tasks, usually via lightweight
fine-tuning of a shallow fully-connected network on top of the representations.
However, the non-interpretable, black-box nature of this prediction pipeline can be
a challenge, especially in critical domains, such as healthcare, finance, and security.
In this paper, we explore the potential of Concept Bottleneck Models (CBMs)
for transforming complex, non-interpretable foundation models into interpretable
decision-making pipelines using high-level concept vectors. Specifically, we focus
on the test-time deployment of such an interpretable CBM pipeline “in the wild”,
where the distribution of inputs often shifts from the original training distribution.
We first identify the potential failure modes of such pipelines under different types
of distribution shifts. Then we propose an adaptive concept bottleneck framework
to address these failure modes; it dynamically adapts the concept-vector bank
and the prediction layer using only unlabeled data from the target domain,
without access to the source dataset. Empirical evaluations with various real-world
distribution shifts show our framework produces concept-based interpretations
better aligned with the test data and boosts post-deployment accuracy by up to
28%, aligning CBM performance with that of non-interpretable classification.
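To make the pipeline described in the abstract concrete, the following is a minimal sketch of a concept-bottleneck head on frozen foundation-model features, adapted at test time without source data. The class and function names, the entropy-minimization objective, and all hyperparameters are illustrative assumptions, not the paper's actual method; the abstract only specifies that the concept-vector bank and the prediction layer are updated from unlabeled target data.

```python
# Illustrative sketch only: a CBM head over frozen FM features with a generic
# source-free adaptation step (entropy minimization). Not the authors' algorithm.
import torch


class ConceptBottleneckHead(torch.nn.Module):
    def __init__(self, concept_bank: torch.Tensor, num_classes: int):
        super().__init__()
        # concept_bank: (num_concepts, feat_dim) concept vectors in FM feature space
        self.concept_bank = torch.nn.Parameter(concept_bank.clone())
        self.classifier = torch.nn.Linear(concept_bank.shape[0], num_classes)

    def forward(self, features: torch.Tensor):
        # Concept scores: similarity of each FM embedding to each concept vector
        concept_scores = features @ self.concept_bank.t()
        return self.classifier(concept_scores), concept_scores


def adapt_on_target_batch(head, features, optimizer):
    """One adaptation step on an unlabeled target batch: minimize prediction
    entropy w.r.t. the concept bank and prediction layer (a common TTA proxy)."""
    logits, _ = head(features)
    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()
```

In this sketch, both the concept bank and the linear prediction layer are registered as trainable parameters, so an optimizer over `head.parameters()` updates exactly the two components the abstract says are adapted, while the foundation-model backbone stays frozen.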
Primary Area: interpretability and explainable AI
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10976