Coherence of Argumentative Dialogue Snippets: A Large Scale Evaluation of Inference Anchoring Theory

ACL ARR 2025 February Submission2728 Authors

15 Feb 2025 (modified: 09 May 2025)
Abstract:

This paper describes a large-scale experimental study (with 933 dialogue snippets and 87 annotators) addressing the research question 'Does Inference Anchoring Theory (IAT) model the structure of coherent debate?' IAT sets out the relation between dialogue structures (illocutionary acts, turns, and their relations) and the inferential relations between the propositions that interlocutors put forward in debate with each other. IAT has been used for substantial corpus annotation and in practical applications. To validate the structures that the theory assigns to debates, we designed an experiment that systematically compares coherence ratings across several variants of short debate snippets. The comparison is between original human-human debate snippets and algorithmically generated variations that comply to different degrees with the structures mandated by IAT. In particular, we use an algorithm that produces alternatives to the original snippets which retain their structure but change their content. We found that whereas the original debate snippets and their IAT-compliant variants received high coherence ratings, snippets that violate IAT-mandated propositional relations received lower ratings, a difference that is statistically highly significant.

Paper Type: Long
Research Area: Discourse and Pragmatics
Research Area Keywords: coherence, discourse relations, dialogue
Contribution Types: Publicly available software and/or pre-trained models, Data resources, Data analysis, Theory
Languages Studied: English
Submission Number: 2728