I Understand How You Feel: Enhancing Deeper Emotional Support Through Multilingual Emotional Validation in Dialogue Systems
Abstract: Emotional validation, explicitly acknowledging that a user's feelings make sense, has proven therapeutic value but has received little computational attention.
We introduce the first three-stage framework for validation in dialogue systems, decomposing the problem into (i) validating-response identification, (ii) validation-timing detection, and (iii) validating-response generation.
To support research on all three subtasks, we release M-EDESConv, a 120k English–Japanese multilingual corpus created through hybrid manual–automatic annotation, and M-TESC, a multilingual spoken-dialogue test set.
For timing detection, we propose MEGUMI (Multilingual Emotion-aware Gated Unit for Mutual Integration), which fuses frozen XLM-RoBERTa semantics with language-specific emotion encoders via cross-modal attention and gated fusion. MEGUMI shows superior performance on both the M-EDESConv and M-TESC datasets.
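The fusion step can be illustrated with a minimal NumPy sketch: semantic token states attend over emotion-encoder states, and a learned sigmoid gate mixes each semantic state with its attended emotion context. All dimensions, weight matrices, and the exact gating form here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
d = 8   # shared hidden size (hypothetical)
T = 5   # number of token positions

# Stand-ins for the two streams: frozen XLM-RoBERTa semantics
# and a language-specific emotion encoder (random here).
H_sem = rng.standard_normal((T, d))   # semantic token states
H_emo = rng.standard_normal((T, d))   # emotion token states

# Cross-modal attention: semantic queries attend over emotion keys/values.
Wq = rng.standard_normal((d, d)) * 0.1
Wk = rng.standard_normal((d, d)) * 0.1
Wv = rng.standard_normal((d, d)) * 0.1
Q, K, V = H_sem @ Wq, H_emo @ Wk, H_emo @ Wv
A = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # (T, T) attention weights
C = A @ V                                   # emotion context per token

# Gated fusion: a sigmoid gate interpolates between the semantic
# state and the attended emotion context, per dimension.
Wg = rng.standard_normal((2 * d, d)) * 0.1
g = sigmoid(np.concatenate([H_sem, C], axis=-1) @ Wg)
fused = g * H_sem + (1.0 - g) * C

print(fused.shape)  # (5, 8)
```

The fused representation would then feed a classification head for validation-timing detection; the gate lets the model down-weight the emotion stream when semantics alone suffice.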
Finally, we benchmark GPT-4.1 nano and Llama-3.1 8B on validating-response generation; few-shot prompting delivers the best balance among semantic fidelity, lexical diversity, and empathy-signal coverage, while chain-of-thought prompts increase diversity at the cost of precision.
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: Dialogue and Interactive Systems, Multilingualism and Cross-Lingual NLP, Human-Centered NLP, Linguistic theories, Cognitive Modeling and Psycholinguistics
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Data resources
Languages Studied: English, Japanese
Submission Number: 7733