Keywords: dialogues, explanations, conversation, interactive
TL;DR: We introduce TalkToModel, a system for explaining machine learning models through open-ended natural language conversations (XAI).
Abstract: Machine Learning (ML) models are increasingly used to make critical decisions in real-world applications, yet they have become more complex, making them harder to understand. To this end, researchers have proposed several techniques to explain model predictions. However, practitioners struggle to use these explainability techniques because they often do not know which one to choose and how to interpret the results of the explanations. In this work, we address these challenges by introducing TalkToModel: an interactive dialogue system for explaining machine learning models through conversations. TalkToModel comprises 1) a dialogue engine that adapts to any tabular model and dataset, understands language, and generates responses, and 2) an execution component that constructs the explanations. In real-world evaluations with humans, 73% of healthcare workers (e.g., doctors and nurses) agreed they would use TalkToModel over baseline point-and-click systems for explainability in a disease prediction task, and 85% of ML professionals agreed TalkToModel was easier to use for computing explanations. Our findings demonstrate that TalkToModel is more effective for model explainability than existing systems, introducing a new category of explainability tools for practitioners. We release code and a demo for the TalkToModel system at: anonymized.
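The abstract describes a two-part architecture: a dialogue engine that interprets the user's utterance, and an execution component that computes the requested explanation. The following is a minimal illustrative sketch of that parse-then-execute pattern, not the authors' implementation; the dataset, the keyword-based intent matching, and the use of global feature importances as the "explanation" are all assumptions made for the example.

```python
# Illustrative sketch of a dialogue-engine + execution-component loop.
# NOT the TalkToModel implementation: intents, dataset, and the explanation
# method (global feature importances) are placeholders for the example.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a simple tabular model to "explain".
data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

def parse_intent(utterance: str) -> str:
    """Toy stand-in for the dialogue engine's language understanding."""
    text = utterance.lower()
    if "important" in text or "why" in text:
        return "feature_importance"
    if "predict" in text:
        return "prediction"
    return "unknown"

def execute(intent: str, instance=None) -> str:
    """Toy stand-in for the execution component that builds the response."""
    if intent == "feature_importance":
        ranked = sorted(zip(data.feature_names, model.feature_importances_),
                        key=lambda pair: pair[1], reverse=True)[:3]
        return "Top features: " + ", ".join(name for name, _ in ranked)
    if intent == "prediction":
        return f"Predicted class: {model.predict([instance])[0]}"
    return "Sorry, I didn't understand that question."

print(execute(parse_intent("Why does the model make these predictions?")))
print(execute(parse_intent("What do you predict for this patient?"),
              instance=data.data[0]))
```

In the paper's actual system, the language-understanding step is far richer than keyword matching and the execution component supports a range of explanation operations; this sketch only conveys the overall division of labor between the two components.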
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2207.04154/code)