ConSim: Measuring Concept-Based Explanations' Effectiveness with Automated Simulatability

ACL ARR 2024 December Submission414 Authors

13 Dec 2024 (modified: 05 Feb 2025) · ACL ARR 2024 December Submission · CC BY 4.0
Abstract: Concept-based explanations work by mapping complex model computations to human-understandable concepts. Evaluating such explanations is difficult because it involves not only the quality of the induced *space of possible concepts* but also how effectively the chosen concepts are *communicated* to users. Existing evaluation metrics often focus solely on the former, neglecting the latter. We introduce an evaluation framework for measuring concept explanations via *automated simulatability*: a simulator's ability to predict the explained model's outputs based on the provided explanations. This approach accounts for both the concept space and its interpretation in an end-to-end evaluation. Human studies of simulatability are notoriously difficult to conduct, particularly at the scale of a wide, comprehensive empirical evaluation (the subject of this work). We propose using large language models (LLMs) as simulators to approximate this evaluation and report several analyses to make such approximations reliable. Our method enables scalable and consistent evaluation across models and datasets. We conduct a comprehensive empirical evaluation with this framework and show that LLMs provide consistent rankings of explanation methods. Code is available at [Anonymous GitHub](https://github.com/AnonymousConSim/ConSim).
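To make the automated-simulatability idea in the abstract concrete, here is a minimal sketch of how such a score could be computed: an LLM simulator predicts the explained model's outputs with and without the concept-based explanations, and the score is the resulting accuracy gain. The `query_llm` helper, prompt wording, and score definition are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
from typing import Callable, Sequence


def simulatability_score(
    inputs: Sequence[str],
    model_outputs: Sequence[str],       # labels predicted by the explained model
    explanations: Sequence[str],        # one concept-based explanation per input
    query_llm: Callable[[str], str],    # hypothetical LLM interface: prompt -> answer
) -> float:
    """Accuracy gain of the simulator when given explanations vs. the input alone."""

    def accuracy(prompts: Sequence[str]) -> float:
        preds = [query_llm(p).strip() for p in prompts]
        return sum(p == y for p, y in zip(preds, model_outputs)) / len(model_outputs)

    base_prompts = [f"Predict the model's label for: {x}" for x in inputs]
    expl_prompts = [
        f"Predict the model's label for: {x}\nConcept explanation: {e}"
        for x, e in zip(inputs, explanations)
    ]
    # A positive score means the explanation helped the simulator mimic the model.
    return accuracy(expl_prompts) - accuracy(base_prompts)
```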
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: hierarchical & concept explanations, explanation faithfulness
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 414