Open Vocabulary Compositional Explanations for Neuron Alignment

27 Mar 2026 (modified: 28 Apr 2026), under review for TMLR, CC BY 4.0
Abstract: Compositional explanations leverage logical relationships between concepts to express the spatial alignment between neuron activations and human knowledge. However, these explanations rely on human-annotated datasets, restricting their applicability to specific domains and predefined concepts. This paper addresses this limitation by introducing a framework for the vision domain that allows users to probe neurons for arbitrary concepts and datasets. Specifically, the framework leverages open vocabulary semantic segmentation models to compute open vocabulary compositional explanations. The proposed framework consists of three steps: identifying concept sets, generating semantic segmentation masks using open vocabulary models, and deriving compositional explanations from these masks. The paper also proposes a process that leverages semantic knowledge graphs to analyze and compare compositional explanations computed by different methods under the same setup. The paper compares the proposed framework with previous methods for computing compositional explanations in terms of both quantitative metrics and human interpretability, analyzes how explanations change when shifting from human-annotated to model-annotated data, and showcases the additional flexibility the framework provides in tailoring explanations to the tasks and properties of interest.
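The third step of the framework, deriving a compositional explanation from segmentation masks, can be illustrated with a minimal sketch. The abstract does not specify the search procedure or the alignment metric, so the code below assumes intersection-over-union (IoU) as the alignment score and a simple greedy search over AND / OR / AND NOT combinations; the function names `iou` and `best_composition` and the toy masks are hypothetical, not from the paper.

```python
import numpy as np

def iou(neuron_mask, concept_mask):
    # Intersection-over-union between a binarized neuron activation map
    # and a (possibly composed) concept segmentation mask.
    inter = np.logical_and(neuron_mask, concept_mask).sum()
    union = np.logical_or(neuron_mask, concept_mask).sum()
    return inter / union if union else 0.0

def best_composition(neuron_mask, concept_masks, max_len=3):
    # Greedy search over logical formulas of concepts (AND, OR, AND NOT),
    # a simplified stand-in for compositional-explanation search.
    best_label, best_mask = max(
        concept_masks.items(),
        key=lambda item: iou(neuron_mask, item[1]),
    )
    best_score = iou(neuron_mask, best_mask)
    for _ in range(max_len - 1):
        improved = False
        for name, m in concept_masks.items():
            for label, combined in (
                (f"({best_label} AND {name})", np.logical_and(best_mask, m)),
                (f"({best_label} OR {name})", np.logical_or(best_mask, m)),
                (f"({best_label} AND NOT {name})", np.logical_and(best_mask, ~m)),
            ):
                score = iou(neuron_mask, combined)
                if score > best_score:
                    best_label, best_mask, best_score = label, combined, score
                    improved = True
        if not improved:
            break
    return best_label, best_score

# Toy example: a neuron that fires on "water" pixels outside a "boat".
water = np.zeros((4, 4), dtype=bool); water[:, :2] = True   # left half
boat = np.zeros((4, 4), dtype=bool); boat[0, :] = True      # top row
neuron = np.logical_and(water, ~boat)
label, score = best_composition(neuron, {"water": water, "boat": boat})
```

In an open vocabulary setting, the concept masks would come from an open vocabulary semantic segmentation model queried with arbitrary user-chosen concept names, rather than from a human-annotated dataset.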
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Martha_Lewis1
Submission Number: 8142