Abstract: In interactive Explainable Artificial Intelligence (XAI), researchers aim to explain model behavior to non-expert users in a natural, understandable way, e.g., via dialogues. We find that available XAI systems lack the ability to understand users and respond to them: they do not consider context and often resemble question-answering setups. Although computational argumentation and didactics have established interaction patterns for explanatory dialogues, a holistic dialogue management concept is still missing. We contribute to conversational XAI in two ways: First, we present a concept for explanatory dialogue management that takes context into account and easily adapts to user needs. Second, we underscore the importance of context by conducting a user study examining Large Language Model (LLM)-generated explanations grounded in dialogue context. Our study shows that responses based on these explanations outperform conventional template-based answers in terms of likeability. Finally, our ablation studies show that open-source models attend only minimally to long contexts and instead rely heavily on the immediate history, yet they can compete with GPT-4 on the task of XAI response generation.
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: human-subject application-grounded evaluations, dialogue, free-text/natural language explanations, feature attribution, interpretability, evaluation and metrics, human evaluation, human-AI interaction, dialogue state tracking, conversational QA
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources, Data analysis, Position papers
Languages Studied: English
Submission Number: 1540