Discursive Socratic Questioning: (Unsupervised) Interpreting Neural Language Models for Discourse Understanding

Anonymous

16 Oct 2022 (modified: 05 May 2023) · ACL ARR 2022 October Blind Submission · Readers: Everyone
Keywords: Discourse, Question answering, Interpretability
Abstract: Do neural language models (NLMs) understand the discourse they process? Traditional interpretation methods that address this question require pre-annotated explanations, which defeats the purpose of unsupervised explanation. We propose Discursive Socratic Questioning (DiSQ), an unsupervised two-step interpretative measure. DiSQ first generates Socratic-style questions about a discourse and then queries NLMs with these questions; a model's understanding is measured by its responses. We apply DiSQ to examine two fundamental discourse phenomena, namely discourse relation and discourse coherence. We find that NLMs demonstrate non-trivial capacities without being trained on any discourse data: Q&A pairs produced by DiSQ serve as evidence for discourse relations and as cohesive devices for discourse coherence. DiSQ thus offers initial evidence that NLMs understand discourse through reasoning. Larger models perform better, but contradictions and hallucinations remain problems. We recommend DiSQ as a universal diagnostic for discursive NLMs and suggest using its output for self-supervision.
Paper Type: long
Research Area: Discourse and Pragmatics