Codebook-Injected Dialogue Segmentation for Multi-Utterance Constructs Annotation: LLM-Assisted and Gold-Label-Free Evaluation

ACL ARR 2026 January Submission 6429 Authors

05 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: LLM Annotation, Dialogue Acts (DA), Segmentation, Evaluation, Tutoring Move, Education
Abstract: Dialogue Act (DA) annotation typically treats communicative or pedagogical intent as localized to individual utterances or turns. As a result, annotators often agree on the underlying action while disagreeing on segment boundaries, which reduces apparent reliability. We propose codebook-injected segmentation, which conditions boundary decisions on downstream annotation criteria, and evaluate LLM-based segmenters against standard and retrieval-augmented baselines. To assess these approaches without gold labels, we introduce evaluation metrics for span consistency, distinctiveness, and human-AI distributional agreement. We find that DA-awareness produces segments that are internally more consistent than those of text-only baselines, but these gains often come at the cost of boundary sharpness or human-AI agreement. While LLMs excel at creating construct-consistent spans, coherence-based baselines remain superior at detecting global shifts in dialogue flow. Across two datasets, no single segmenter dominates: improvements in within-segment coherence frequently trade off against boundary distinctiveness and human-AI distributional agreement. These results highlight segmentation as a consequential design choice that should be optimized for downstream objectives rather than a single performance score.
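The abstract names two gold-label-free quantities, span consistency and boundary distinctiveness. The paper's exact definitions are not reproduced here; the following is a minimal sketch of one plausible instantiation, assuming bag-of-words cosine similarity as the utterance-similarity function (the paper may use embeddings or other similarity measures). All function names are illustrative, not the authors'.

```python
# Hedged sketch of gold-label-free segmentation scores (assumed definitions):
#   - span_consistency: mean pairwise similarity of utterances inside a segment
#   - boundary_distinctiveness: 1 minus mean similarity across segment boundaries
from collections import Counter
import math

def bow(utt: str) -> Counter:
    # Simple bag-of-words vector; a real system would likely use embeddings.
    return Counter(utt.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def span_consistency(segment: list[str]) -> float:
    # Average over all utterance pairs within one segment.
    vecs = [bow(u) for u in segment]
    pairs = [(i, j) for i in range(len(vecs)) for j in range(i + 1, len(vecs))]
    if not pairs:
        return 1.0  # a single-utterance segment is trivially consistent
    return sum(cosine(vecs[i], vecs[j]) for i, j in pairs) / len(pairs)

def boundary_distinctiveness(segments: list[list[str]]) -> float:
    # Compare the last utterance of each segment to the first of the next;
    # sharper boundaries mean lower cross-boundary similarity.
    sims = [cosine(bow(a[-1]), bow(b[0])) for a, b in zip(segments, segments[1:])]
    return 1.0 - sum(sims) / len(sims) if sims else 0.0
```

Under this sketch, the trade-off the abstract reports would appear as segmentations that raise mean span_consistency while lowering boundary_distinctiveness, or vice versa.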
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: applications, dialogue state tracking, evaluation and metrics
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: Python
Submission Number: 6429