A Deep Generative XAI Framework for Natural Language Inference Explanations Generation

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: Explainable artificial intelligence with natural language explanations (Natural-XAI) aims to produce human-readable explanations as evidence for AI decision-making. Such evidence can enhance human trust in and understanding of AI systems, and contributes to AI explainability and transparency. However, current approaches focus on generating only a single explanation. In this paper, we conduct experiments with the state-of-the-art Transformer architecture and explore multiple-explanation generation on a public benchmark dataset, e-SNLI (Camburu et al., 2018). We propose INITIATIVE (explaIn aNd predIcT wIth contextuAl condiTIonal Variational autoEncoder), a novel deep generative Natural-XAI framework that generates natural language explanations and makes a prediction at the same time. Our method achieves competitive or better performance than state-of-the-art baseline models on both the generation task (a 4.7% improvement in BLEU score) and the prediction task (a 4.4% improvement in accuracy). Our work can serve as a solid deep generative baseline for future Natural-XAI research. Our code will be made publicly available on GitHub upon paper acceptance.
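The abstract builds on the conditional variational autoencoder (CVAE) objective. As a minimal sketch (not the authors' implementation; all names and shapes here are hypothetical), the per-example CVAE loss combines a reconstruction term with a KL regularizer pulling the approximate posterior toward the prior; with diagonal Gaussians and a standard-normal prior, the KL term has the familiar closed form:

```python
import math

def kl_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, exp(log_var)) || N(0, I) ), summed over dimensions:
    0.5 * sum(mu^2 + sigma^2 - log sigma^2 - 1)."""
    return 0.5 * sum(m * m + math.exp(lv) - lv - 1.0
                     for m, lv in zip(mu, log_var))

def cvae_loss(recon_nll, mu, log_var, kl_weight=1.0):
    """Total CVAE objective: reconstruction negative log-likelihood
    plus a (possibly annealed) KL regularizer."""
    return recon_nll + kl_weight * kl_standard_normal(mu, log_var)

# A latent at the prior mean with unit variance (mu=0, log_var=0)
# contributes zero KL, so the loss reduces to the reconstruction term:
print(cvae_loss(recon_nll=2.5, mu=[0.0, 0.0], log_var=[0.0, 0.0]))  # -> 2.5
```

In practice the `kl_weight` is often annealed from 0 to 1 during training to avoid posterior collapse, a common issue when a strong decoder generates text.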