Coherence Evaluation of Visual Concepts With Objects and Language

Published: 25 Mar 2022, Last Modified: 05 May 2023. ICLR 2022 OSC Poster.
Keywords: Visual Concepts, Interpretability, Objects, Vision-Language modeling, Conceptual Explanations
TL;DR: We use objects and language as forms of supervision to evaluate the quality of automatically discovered visual concepts.
Abstract: Meaningful concepts are the fundamental elements of human reasoning. In explainable AI, they are used to provide concept-based explanations of machine learning models. The concepts are often extracted from large-scale image data sets in an unsupervised manner and are therefore not guaranteed to be meaningful to users. In this work, we investigate to what extent we can automatically assess the meaningfulness of such visual concepts using objects and language as forms of supervision. As a step towards discovering more interpretable concepts, we propose the “Semantic-level, Object and Language-Guided Coherence Evaluation” framework for visual concepts (SOLaCE). SOLaCE assigns semantic meanings in the form of words to automatically discovered visual concepts and evaluates their degree of meaningfulness at this higher level without human effort. We consider the question of whether objects suffice as possible meanings, or whether a broader vocabulary including more abstract meanings needs to be considered. By means of a user study, we confirm that our simulated evaluations agree closely with the human perception of coherence. We publicly release our data set containing 2,600 human ratings of visual concepts.
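The abstract describes SOLaCE only at a high level, so as an illustration, here is a minimal sketch of how object-and-language-guided coherence scoring could look, assuming a CLIP-style vision-language model. The function name `concept_coherence`, the majority-agreement scoring rule, and the file names are hypothetical assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of SOLaCE-style coherence scoring (not the authors' code).
# Assumes OpenAI's CLIP package: pip install git+https://github.com/openai/CLIP.git
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def concept_coherence(patch_paths, vocabulary):
    """Assign each concept patch its best-matching word and score coherence
    as the fraction of patches that agree on the majority word."""
    images = torch.stack([preprocess(Image.open(p)) for p in patch_paths]).to(device)
    tokens = clip.tokenize(vocabulary).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(images)
        txt_feat = model.encode_text(tokens)
    # Cosine similarity between each patch and each candidate word
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    best = (img_feat @ txt_feat.T).argmax(dim=-1)        # best word per patch
    counts = torch.bincount(best, minlength=len(vocabulary))
    meaning = vocabulary[counts.argmax().item()]         # majority word
    coherence = counts.max().item() / len(patch_paths)   # agreement ratio
    return meaning, coherence

# Example: score one discovered concept against a candidate vocabulary
# meaning, score = concept_coherence(["patch1.png", "patch2.png"], ["dog", "grass", "sky"])
```

In this sketch, the vocabulary argument directly reflects the paper's question of objects versus broader meanings: passing only object nouns restricts the candidate meanings to objects, while adding more abstract words (e.g., "stripes", "texture") broadens them.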