Dialogue Discourse Dependency Parsing from Pre-Trained and Fine-Tuned Language Models

Anonymous

16 Oct 2022 (modified: 05 May 2023) · ACL ARR 2022 October Blind Submission · Readers: Everyone
Keywords: Discourse analysis, dialogue, language models
Abstract: Discourse parsing suffers from data sparsity, especially for dialogues. We therefore explore approaches that build naked (unlabeled) discourse structures for dialogues from the attention matrices of Pre-trained Language Models (PLMs). We investigate multiple auxiliary tasks for fine-tuning and show that the dialogue-tailored Sentence Ordering (SO) task performs best. For the crucial step of selecting the best attention head in the PLM, we propose unsupervised and semi-supervised methods. On the Strategic Conversation (STAC) corpus, we reach F1 scores of 57.2 with the unsupervised and 59.3 with the semi-supervised method, state of the art for both settings. Restricting the evaluation to projective trees, the scores improve to 63.3 and 68.1, respectively.
Paper Type: long
Research Area: Discourse and Pragmatics
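
To make the attention-based attachment idea in the abstract concrete, here is a minimal sketch, not the authors' implementation: it pools the token-to-token attention of one chosen PLM head into an EDU-to-EDU matrix and greedily attaches each EDU to the preceding EDU it attends to most. The model name (bert-base-uncased), the layer/head indices, the mean pooling, and the greedy attachment rule are illustrative assumptions; the paper's actual head selection and SO fine-tuning are not reproduced here.

```python
# Illustrative sketch only: derive an unlabeled ("naked") dependency structure
# over EDUs from the attention matrix of a single PLM head.
# Assumptions (not from the paper): bert-base-uncased, layer 8 / head 5,
# mean-pooled token attention, greedy attachment to a preceding EDU.
import torch
from transformers import AutoTokenizer, AutoModel


def edu_token_spans(tokenizer, edus):
    """Tokenize EDUs one by one, recording which token positions cover each EDU."""
    ids, spans = [tokenizer.cls_token_id], []
    for edu in edus:
        toks = tokenizer.encode(edu, add_special_tokens=False)
        spans.append((len(ids), len(ids) + len(toks)))
        ids.extend(toks)
    ids.append(tokenizer.sep_token_id)
    return torch.tensor([ids]), spans


@torch.no_grad()
def parse_dialogue(edus, layer=8, head=5, model_name="bert-base-uncased"):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name, output_attentions=True).eval()

    input_ids, spans = edu_token_spans(tokenizer, edus)
    attn = model(input_ids).attentions[layer][0, head]  # (seq_len, seq_len)

    # Pool token-to-token attention into an EDU-to-EDU matrix.
    n = len(edus)
    edu_attn = torch.zeros(n, n)
    for i, (si, ei) in enumerate(spans):
        for j, (sj, ej) in enumerate(spans):
            edu_attn[i, j] = attn[si:ei, sj:ej].mean()

    # Greedy attachment: each EDU (except the first) depends on the earlier
    # EDU it attends to most strongly; attaching backwards always yields a tree.
    heads = [None]
    for i in range(1, n):
        heads.append(int(edu_attn[i, :i].argmax()))
    return heads  # heads[i] is the parent index of EDU i (None for the root)


if __name__ == "__main__":
    dialogue = ["anyone want to trade wheat?", "i have ore", "no thanks"]
    print(parse_dialogue(dialogue))  # e.g. [None, 0, 0]
```

Attaching each EDU only to an earlier EDU keeps the structure a tree by construction; the paper's semi-supervised head selection and projectivity-restricted evaluation would refine which head and which attachments are used.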