Interpreting CLIP with Sparse Linear Concept Embeddings (SpLiCE)

Published: 25 Sept 2024, Last Modified: 06 Nov 2024 · NeurIPS 2024 poster · CC BY 4.0
Keywords: Interpretable Machine Learning, Dictionary Learning, Representation Learning, Multimodal Models, Interpretability, CLIP
TL;DR: We use dictionary learning to interpret CLIP embeddings by representing them as sparse combinations of semantic concepts, resulting in interpretability while maintaining high performance and unlocking novel use cases.
Abstract: CLIP embeddings have demonstrated remarkable performance across a wide range of multimodal applications. However, these high-dimensional, dense vector representations are not easily interpretable, limiting our understanding of the rich structure of CLIP and its use in downstream applications that require transparency. In this work, we show that the semantic structure of CLIP's latent space can be leveraged to provide interpretability, allowing for the decomposition of representations into semantic concepts. We formulate this problem as one of sparse recovery and propose a novel method, Sparse Linear Concept Embeddings (SpLiCE), for transforming CLIP representations into sparse linear combinations of human-interpretable concepts. Distinct from previous work, SpLiCE is task-agnostic and can be used, without training, to explain and even replace traditional dense CLIP representations, maintaining high downstream performance while significantly improving their interpretability. We also demonstrate significant use cases of SpLiCE representations including detecting spurious correlations and model editing. Code is provided at https://github.com/AI4LIFE-GROUP/SpLiCE.
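To illustrate the sparse-recovery idea described in the abstract, the following is a minimal, hypothetical sketch: it decomposes a CLIP-style embedding into a sparse nonnegative combination of concept vectors using a Lasso solver. The random stand-in embeddings, the choice of solver, and all hyperparameters here are illustrative assumptions, not the paper's actual implementation, which is available in the linked repository.

```python
# Toy sketch (assumptions, not the official SpLiCE code): sparse nonnegative
# decomposition of an image embedding over a concept dictionary via Lasso.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Stand-ins for real CLIP embeddings: m unit-normalized concept text
# embeddings (the dictionary) and one unit-normalized image embedding.
d, m = 512, 1000                      # embedding dim, concept vocabulary size
concepts = rng.standard_normal((m, d))
concepts /= np.linalg.norm(concepts, axis=1, keepdims=True)
image_emb = rng.standard_normal(d)
image_emb /= np.linalg.norm(image_emb)

# Sparse recovery: find nonnegative weights w such that
# image_emb ~= concepts.T @ w, with only a few nonzero entries.
lasso = Lasso(alpha=0.01, positive=True, fit_intercept=False, max_iter=10000)
lasso.fit(concepts.T, image_emb)      # design matrix: one column per concept
w = lasso.coef_

# The few nonzero weights name the concepts that "explain" the embedding.
top = np.argsort(w)[::-1][:5]
print("top concept indices and weights:", list(zip(top.tolist(), w[top])))
```

In practice the dictionary would be built from text embeddings of a curated concept vocabulary rather than random vectors, and the sparsity level would be tuned to trade off reconstruction quality against interpretability.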
Primary Area: Interpretability and explainability
Submission Number: 18427