Abstract: Which components in transformer language models are responsible for discourse understanding? We hypothesize that sparse computational graphs, termed discursive circuits, control how models process discourse relations. Unlike simpler tasks, discourse relations involve longer spans and complex reasoning. To make circuit discovery feasible, we introduce a task called Completion under Discourse Relation (CUDR), in which a model completes a discourse given a specified relation. To support this task, we construct a corpus of minimal contrastive pairs tailored for activation patching in circuit discovery. Experiments show that sparse circuits (≈0.2% of a full GPT-2 model) recover discourse understanding on the English PDTB-based CUDR task.
These circuits generalize well to unseen discourse frameworks such as RST and SDRT. Further analysis shows that lower layers capture linguistic features such as lexical semantics and coreference, while upper layers encode discourse-level abstractions. Feature utility is consistent across frameworks (e.g., coreference supports Expansion-like relations).
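Below is a minimal sketch of activation patching on a contrastive pair, of the kind the abstract describes, assuming the TransformerLens library and GPT-2 small; the example texts, the chosen layer/head, and the helper patch_head_output are illustrative assumptions, not the paper's actual corpus or circuit-discovery procedure.

import torch
from transformer_lens import HookedTransformer
from transformer_lens.utils import get_act_name

model = HookedTransformer.from_pretrained("gpt2")

# Hypothetical minimal contrastive pair: same context, discourse cue flipped,
# chosen so the two prompts are token-aligned.
clean_text = "She missed the bus because"
corrupt_text = "She missed the bus although"

clean_tokens = model.to_tokens(clean_text)
corrupt_tokens = model.to_tokens(corrupt_text)

# Cache activations from the clean run.
_, clean_cache = model.run_with_cache(clean_tokens)

def patch_head_output(corrupt_act, hook, head_index, cache):
    # Overwrite one attention head's output with its clean-run value.
    corrupt_act[:, :, head_index, :] = cache[hook.name][:, :, head_index, :]
    return corrupt_act

layer, head = 9, 6  # illustrative component to patch
hook_name = get_act_name("z", layer)  # per-head attention output

patched_logits = model.run_with_hooks(
    corrupt_tokens,
    fwd_hooks=[(hook_name,
                lambda act, hook: patch_head_output(act, hook, head, clean_cache))],
)
# Comparing patched_logits against the clean and corrupted baselines indicates how
# much this component contributes to processing the discourse relation; components
# with large effects would be candidates for a sparse discursive circuit.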
Paper Type: Long
Research Area: Discourse and Pragmatics
Research Area Keywords: Discourse Relation, Mechanistic Interpretability
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English
Keywords: Discourse Relation, Mechanistic Interpretability
Submission Number: 6261