Abstract: Explainable AI (XAI) aims to provide insights into decisions made by deep neural networks.
To date, most XAI approaches provide only one-time, static explanations, which cannot cater to users' diverse knowledge levels and information needs.
Conversational explanations have been proposed as an effective method to customize XAI explanations. However, building conversational explanation systems is hindered by the scarcity of training data.
Training with synthetic data faces two main challenges: lack of data diversity and hallucination in the generated data. To alleviate these issues, we introduce a repetition penalty to promote data diversity and exploit a hallucination detector to filter out untruthful synthetic conversation turns. The proposed system, fEw-shot Multi-round ConvErsational Explanation (EMCEE), achieves relative improvements of 81.6% in BLEU and 80.5% in ROUGE compared to the baselines. EMCEE also mitigates the degeneration of data quality caused by training on synthetic data. In human evaluations, EMCEE outperforms baseline models in improving users' comprehension, acceptance, trust, and collaboration with static explanations by large margins. To the best of our knowledge, this is the first conversational explanation method that can answer arbitrary user questions that follow from static explanations.
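The repetition penalty mentioned in the abstract is a standard decoding-time technique; below is a minimal sketch in the style of the CTRL penalty. The function name, the penalty value of 1.2, and the assumption of a 1-D logits tensor are illustrative only, since the abstract does not specify the exact formulation EMCEE uses.

```python
import torch

def apply_repetition_penalty(logits: torch.Tensor,
                             generated_ids: torch.Tensor,
                             penalty: float = 1.2) -> torch.Tensor:
    """Down-weight tokens that already appear in the generated prefix.

    CTRL-style penalty: divide positive logits and multiply negative
    ones, so previously generated tokens always become less likely.
    """
    for token_id in set(generated_ids.tolist()):
        score = logits[token_id]
        logits[token_id] = score / penalty if score > 0 else score * penalty
    return logits

# Example: token 7 appears in the prefix, so its logit is rescaled.
logits = torch.tensor([2.0, -1.0, 0.5, 3.0, 1.0, -0.2, 0.1, 4.0])
generated = torch.tensor([7, 3, 7])
print(apply_repetition_penalty(logits, generated))
```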
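The hallucination-filtering step can likewise be sketched with an off-the-shelf NLI model standing in for the paper's detector: a synthetic turn is kept only if the static explanation entails it. The model choice (roberta-large-mnli), the entailment threshold, the helper name, and the toy data are assumptions for illustration, not EMCEE's actual detector.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Stand-in hallucination detector: a generic NLI model (assumed here;
# the abstract does not name the detector EMCEE actually uses).
tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def is_entailed(premise: str, hypothesis: str, threshold: float = 0.5) -> bool:
    """True if the NLI model judges `hypothesis` entailed by `premise`."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]
    # roberta-large-mnli label order: contradiction, neutral, entailment.
    return probs[2].item() >= threshold

static_explanation = "The classifier predicted 'dog' mainly because of the ear and snout regions."
synthetic_turns = [
    "The ear region strongly influenced the prediction.",   # faithful turn
    "The model relied on the background grass to decide.",  # hallucinated turn
]
kept = [t for t in synthetic_turns if is_entailed(static_explanation, t)]
```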
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: free-text/natural language explanations; conversational explanations
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 4586