Keywords: Emotional Support Conversation, Dialogue Policy Learning, Causal Inference
Abstract: While Large Language Models (LLMs) have significantly advanced the fluency of Emotional Support Conversation (ESC) systems, current research predominantly focuses on engineering increasingly complex architectures, from intricate reasoning chains to multi-agent collaborations. This trend yields opaque "black box" models that obscure the causal mechanisms linking dialogue features to effective empathic strategies, resulting in poor interpretability and susceptibility to distribution shift in offline learning. To address these limitations, we propose Causal-ESC, a novel framework. Departing from conventional paradigms that feed raw dialogue history directly into the model, our approach introduces Doubly Robust (DR) learning to explicitly estimate the causal effect of utterance features on strategy selection, mitigating the biases and counterfactual unobservability inherent in offline datasets. We further integrate an LLM-based stylized rewriting mechanism that translates these rigorously learned causal strategies into natural, context-consistent responses. Comprehensive experiments, supported by statistical verification (e.g., outcome $R^2$) and human-like evaluation, demonstrate that our framework not only significantly outperforms state-of-the-art baselines in empathy and helpfulness but also provides a theoretically grounded, interpretable solution to the "black box" dilemma in affective computing.
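The Doubly Robust estimation the abstract describes can be illustrated on synthetic data. The sketch below is a generic DR average-treatment-effect estimator, not the paper's actual implementation; the scalar "dialogue feature," the binary "strategy" treatment, and the linear/logistic nuisance models are all illustrative assumptions. It combines an outcome model with an inverse-propensity correction, so the estimate stays consistent if either model is correctly specified.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# x: a hypothetical scalar utterance feature (e.g., distress intensity)
x = rng.normal(size=n)
# confounded "strategy" assignment: higher x -> strategy 1 more likely
t = rng.binomial(1, 1 / (1 + np.exp(-1.5 * x)))
# outcome (e.g., a support-quality score); true strategy effect = 2.0
y = 2.0 * t + 1.0 * x + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x])  # design matrix with intercept

def fit_ols(Xm, ym):
    beta, *_ = np.linalg.lstsq(Xm, ym, rcond=None)
    return beta

# outcome models mu1, mu0 fit separately on treated / control subsets
mu1 = X @ fit_ols(X[t == 1], y[t == 1])
mu0 = X @ fit_ols(X[t == 0], y[t == 0])

# propensity model: logistic regression via simple gradient ascent
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (t - p) / n
e = np.clip(1 / (1 + np.exp(-X @ w)), 1e-3, 1 - 1e-3)

# doubly robust estimate: outcome-model prediction plus IPW residual term
dr = np.mean(mu1 - mu0
             + t * (y - mu1) / e
             - (1 - t) * (y - mu0) / (1 - e))
print(round(dr, 2))  # close to the true effect of 2.0
```

Because assignment here depends on x, a naive mean difference `y[t==1].mean() - y[t==0].mean()` would be biased upward; the DR estimator corrects for that confounding.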
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: task-oriented, grounded dialog, conversational modeling
Contribution Types: Model analysis & interpretability, Data analysis, Theory
Languages Studied: English
Submission Number: 5172