A Novel Computational Modeling Foundation for Automatic Coherence Assessment

ACL ARR 2024 June Submission1241 Authors

14 Jun 2024 (modified: 17 Jul 2024) · License: CC BY 4.0
Abstract: Coherence is an essential property of well-written texts, referring to the way textual units relate to one another. In the era of generative AI, coherence assessment is essential for many NLP tasks, such as summarization and long-form question answering. Current NLP approaches to modeling coherence often rely on a proxy task, most commonly sentence reordering. However, such an approach may not capture the full range of factors contributing to coherence. To bridge this gap, in this work we employ Reinhart's (1980) formal linguistic definition of what makes a discourse coherent, consisting of three conditions (cohesion, consistency, and relevance), and formalize these conditions as respective computational tasks. We hypothesize that (i) a model trained on all of these tasks will learn the features required for coherence detection, and (ii) a joint model for all tasks will exceed the performance of models trained on each task individually. We evaluate this modeling approach on two human-rated coherence benchmarks: one of automatically generated stories and one of real-world texts. Our experiments confirm that joint training on the proposed tasks leads to better performance on each task than task-specific models achieve, and to better overall coherence assessment than strong baselines. Our formal coherence framework paves the way for advanced, broad-coverage automatic assessment.
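The joint-training hypothesis in the abstract can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' implementation: it assumes each of the three conditions (cohesion, consistency, relevance) has its own task head producing a per-batch loss, and that joint training simply optimizes a weighted sum of those losses over a shared encoder.

```python
# Hypothetical sketch (not the paper's code): combining per-task losses for
# Reinhart's three coherence conditions into a single joint objective.

TASKS = ["cohesion", "consistency", "relevance"]

def joint_loss(losses, weights=None):
    """Weighted sum of per-task losses; equal weights by default.

    losses  -- dict mapping each task name to its scalar batch loss
    weights -- optional dict of per-task weights (assumed hyperparameters)
    """
    if weights is None:
        weights = {t: 1.0 for t in TASKS}
    return sum(weights[t] * losses[t] for t in TASKS)

# Example: per-batch losses from three task heads sharing one encoder.
batch_losses = {"cohesion": 0.42, "consistency": 0.31, "relevance": 0.55}
total = joint_loss(batch_losses)
print(total)
```

Under this reading, the task-specific baselines correspond to optimizing one term of the sum in isolation, while the joint model backpropagates the combined objective through the shared encoder.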
Paper Type: Long
Research Area: Discourse and Pragmatics
Research Area Keywords: coherence, cohesion, consistency, relevance, linguistic theory, nlp applications, language modeling, semantics
Languages Studied: English
Submission Number: 1241