An Analysis of Concept Bottleneck Models: Measuring, Understanding, and Mitigating the Impact of Noisy Annotations
Keywords: concept bottleneck models, label noise
TL;DR: Label noise in CBMs cripples prediction performance, interpretability, and interventions via a few susceptible concepts. We combat this with sharpness-aware training and entropy-based concept correction, restoring the robustness of CBMs.
Abstract: Concept bottleneck models (CBMs) ensure interpretability by decomposing predictions into human-interpretable concepts.
Yet the concept annotations that enable this transparency and are used to train CBMs are often noisy, and the impact of such corruption is not well understood.
We present the first systematic study of noise in CBMs and show that even moderate corruption simultaneously impairs prediction performance, interpretability, and intervention effectiveness.
Our analysis identifies a susceptible subset of concepts whose accuracy declines far more than the average gap between noisy and clean supervision, and whose corruption accounts for most of the performance loss.
To mitigate this vulnerability, we propose a two-stage framework.
During training, sharpness-aware minimization stabilizes the learning of noise-sensitive concepts.
During inference, where clean labels are unavailable, we rank concepts by predictive entropy and correct only the most uncertain ones, using uncertainty as a proxy for susceptibility.
Theoretical analysis and extensive ablations elucidate why sharpness-aware training confers robustness and why uncertainty reliably identifies susceptible concepts, providing a principled basis for preserving both interpretability and resilience in the presence of noise.
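The two-stage framework described above can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' implementation), assuming a PyTorch-style CBM with a linear concept predictor and a linear task head; names such as ConceptBottleneck, rho, top_k, and oracle_concepts are assumptions made for the example. Each training step follows the sharpness-aware minimization recipe (perturb the weights along the normalized gradient, recompute the gradient at the perturbed point, then restore and update), and at inference concepts are ranked by predictive entropy so that only the top_k most uncertain ones are replaced by queried values before the label is re-predicted.

```python
# Minimal sketch of SAM training plus entropy-guided concept correction for a CBM.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConceptBottleneck(nn.Module):
    """Toy CBM: inputs -> concept logits -> task logits."""

    def __init__(self, in_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.concept_net = nn.Linear(in_dim, n_concepts)
        self.task_net = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concept_logits = self.concept_net(x)
        task_logits = self.task_net(torch.sigmoid(concept_logits))
        return concept_logits, task_logits


def sam_training_step(model, optimizer, x, concepts, labels, rho=0.05):
    """One SAM step: ascend to a nearby sharp point, then descend from there."""
    def loss_fn():
        c_logits, y_logits = model(x)
        return (F.binary_cross_entropy_with_logits(c_logits, concepts)
                + F.cross_entropy(y_logits, labels))

    # First pass: gradient at the current weights.
    loss = loss_fn()
    optimizer.zero_grad()
    loss.backward()

    # Perturb weights along the normalized gradient direction.
    grads = [p.grad.detach().clone() if p.grad is not None else None
             for p in model.parameters()]
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads if g is not None)) + 1e-12
    eps = []
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            e = rho * g / grad_norm if g is not None else torch.zeros_like(p)
            p.add_(e)
            eps.append(e)

    # Second pass: gradient at the perturbed weights, then restore and update.
    optimizer.zero_grad()
    loss_fn().backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)
    optimizer.step()
    return loss.item()


@torch.no_grad()
def entropy_guided_intervention(model, x, oracle_concepts, top_k=3):
    """Correct only the top_k highest-entropy concepts, then re-predict the label."""
    c_logits, _ = model(x)
    p = torch.sigmoid(c_logits)
    entropy = -(p * torch.log(p + 1e-12) + (1 - p) * torch.log(1 - p + 1e-12))
    idx = entropy.topk(top_k, dim=1).indices              # most uncertain concepts per sample
    corrected = p.clone()
    corrected.scatter_(1, idx, oracle_concepts.gather(1, idx))
    return model.task_net(corrected)                       # task prediction from corrected concepts
```

In this sketch, predictive entropy acts as the inference-time proxy for susceptibility, so a small intervention budget is spent only on the concepts most likely to have been corrupted by noisy annotations.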
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 6647