This paper describes a large-scale experimental study (with 933 dialogue snippets and 87 annotators) addressing the research question 'Does Inference Anchoring Theory (IAT) model the structure of coherent debate?' IAT specifies the relation between dialogue structures (illocutionary acts, turns and their relations) and the inferential relations between the propositions that interlocutors put forward in debate with each other. IAT has been used for substantial corpus annotation and in practical applications. To validate the structures that the theory assigns to debates, we designed an experiment that systematically compares coherence ratings across several variants of short debate snippets. The comparison is between original human-human debate snippets and algorithmically generated variations that comply to different degrees with the structures mandated by IAT. In particular, we use an algorithm that produces alternatives to the original snippets which retain their structure but change their content. We found that whereas the original debate snippets and their IAT-compliant variants received high coherence ratings, snippets that violate IAT-mandated propositional relations received lower ratings (a difference that is statistically highly significant).