Hierarchical Concept Discovery Models: A Concept Pyramid Scheme

19 Sept 2023 (modified: 30 Jan 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Interpretability, Explainability, Concept Bottleneck, Sparsity, Multimodal Models, Concepts, Textual Descriptions, Bayesian, Mask
TL;DR: We propose a novel hierarchical construction for multi-level concept discovery; we do not solely rely on the similarity between concepts and the whole image, but we also consider granular information residing in patch-specific regions of the image.
Abstract: Deep Learning algorithms have recently gained significant attention due to their impressive performance. However, their high complexity and uninterpretable mode of operation hinder their confident deployment in real-world safety-critical tasks. This work targets *ante hoc* interpretability, and specifically Concept Bottleneck Models (CBMs). Our goal is to design a framework that admits a highly interpretable decision-making process with respect to human-understandable concepts, on *multiple levels of granularity*. To this end, we propose a novel hierarchical concept discovery formulation leveraging: (i) recent advances in image-text models, and (ii) an innovative formulation for *multi-level concept selection* via data-driven and sparsity-inducing Bayesian arguments. Within this framework, concept information does not solely rely on the similarity between the *whole* image and general *unstructured* concepts; instead, we introduce the notion of *concept hierarchy* to uncover and exploit more granular concept information residing in *patch-specific* regions of the image scene. As we experimentally show, the proposed construction not only outperforms recent CBM approaches, but also yields a principled framework towards interpretability.
Supplementary Material: zip
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1823