Locality-aware Concept Bottleneck Model

Published: 10 Oct 2024, Last Modified: 07 Nov 2024 · UniReps · CC BY 4.0
Track: Extended Abstract Track
Keywords: interpretability, explainability, concept bottleneck model
Abstract: Concept bottleneck models (CBMs) are inherently interpretable models that make predictions based on human-understandable visual cues, referred to as concepts. Because obtaining dense concept annotations through human labeling is demanding and costly, recent approaches use foundation models to determine which concepts are present in an image. However, such label-free CBMs often fail to attend to concepts that are predictively important but appear only in a small region of the image (e.g., the beak of a bird), making their decision-making less aligned with human reasoning. In this paper, we propose a novel framework, coined Locality-aware CBM (LCBM), which divides an image into smaller patches. Specifically, we use the similarity between each patch and the concepts to ensure that the model's concept predictions adhere to the relevant regions and effectively capture important local concepts that appear only in small parts of the image. Experimental results demonstrate that LCBM accurately identifies important concepts in images and exhibits improved localization capability while maintaining high classification performance.
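To make the patch-based idea concrete, the sketch below shows one way a locality-aware concept bottleneck could be wired up in PyTorch. This is not the authors' implementation; the class name, the learnable concept embeddings (which in practice would come from a text encoder), and the max-pooling over patches are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LocalityAwareConceptBottleneck(nn.Module):
    """Illustrative sketch: patch-level concept scoring followed by a linear
    class head. Patch features and concept embeddings are assumed to live in
    a shared embedding space (e.g., from a vision-language backbone)."""

    def __init__(self, embed_dim: int, num_concepts: int, num_classes: int):
        super().__init__()
        # Concept embeddings would normally be produced by a text encoder;
        # here they are free parameters so the example runs standalone.
        self.concept_embeddings = nn.Parameter(torch.randn(num_concepts, embed_dim))
        # Interpretable linear layer mapping concept scores to class logits.
        self.classifier = nn.Linear(num_concepts, num_classes)

    def forward(self, patch_features: torch.Tensor):
        # patch_features: (batch, num_patches, embed_dim)
        patches = nn.functional.normalize(patch_features, dim=-1)
        concepts = nn.functional.normalize(self.concept_embeddings, dim=-1)
        # Cosine similarity between every patch and every concept:
        # shape (batch, num_patches, num_concepts).
        patch_concept_sim = patches @ concepts.t()
        # Max-pool over patches so a concept that occupies only a small
        # region (e.g., a beak) can still receive a high image-level score.
        concept_scores, _ = patch_concept_sim.max(dim=1)
        logits = self.classifier(concept_scores)
        # The per-patch similarities indicate where each concept was found.
        return logits, patch_concept_sim


if __name__ == "__main__":
    model = LocalityAwareConceptBottleneck(embed_dim=512, num_concepts=128, num_classes=200)
    dummy_patches = torch.randn(2, 49, 512)  # e.g., a 7x7 grid of patch features
    logits, sims = model(dummy_patches)
    print(logits.shape, sims.shape)  # (2, 200), (2, 49, 128)
```

Max-pooling is only one plausible aggregation choice; the key point is that concept scores are grounded in patch-level similarities, which gives per-concept spatial localization alongside the interpretable bottleneck.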
Submission Number: 23