ECG-LoRA: A Conditional Fidelity Gated Adaptation Method for Empathetic Response Generation

ACL ARR 2026 January Submission 9559 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: empathetic dialogue, emotional fidelity, controllable generation, Parameter-Efficient Fine-Tuning, LoRA adaptation, gating mechanism, large language models, dialogue systems, human-like response imitation, emotion conditioning
Abstract: Although Large Language Models (LLMs) excel in open-domain dialogue, they often struggle with emotional fidelity: the ability to dynamically adapt response styles and strategies to fine-grained emotional contexts. Existing Parameter-Efficient Fine-Tuning (PEFT) methods like LoRA typically apply static updates, failing to capture such nuanced variations. To address this, we propose Emotion-Conditioned Gated LoRA (ECG-LoRA), a novel framework that introduces a lightweight, input-aware gating mechanism to dynamically scale LoRA updates based on emotional signals. This design enables the model to intrinsically allocate adaptation capacity: intensifying intervention for high-arousal emotions while preserving base knowledge for subtle states. Extensive experiments on the EmpatheticDialogues benchmark across three LLM backbones (Qwen2-7B, Llama-3-8B, Gemma-7B) demonstrate that ECG-LoRA significantly outperforms standard LoRA in both generation quality (e.g., +0.65% BERTScore) and emotional fidelity (e.g., +4.29% Emp-F1). Our framework is highly efficient, requiring only 0.12% additional parameters. More importantly, it introduces a new perspective for controllable text generation: targeting the imitation of complex, intrinsically consistent human behavioral patterns beyond simple semantic alignment. Code and data are released to facilitate future research.
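The abstract describes scaling the LoRA update with a lightweight gate computed from an emotional signal. A minimal NumPy sketch of one such layer is shown below; the class name, the sigmoid gate, and the emotion-embedding input are illustrative assumptions, since the paper's exact parameterization is not given here.

```python
import numpy as np

class ECGLoRALinear:
    """Illustrative sketch of an emotion-conditioned gated LoRA layer.

    Computes h = W x + g(e) * (alpha / r) * B A x, where g(e) in (0, 1)
    is a scalar gate derived from an emotion embedding e. High-arousal
    signals can push g(e) toward 1 (stronger intervention), while subtle
    states keep g(e) small, preserving the frozen base behavior.
    All names here are hypothetical, not the authors' exact API.
    """

    def __init__(self, d_in, d_out, d_emo, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen base weight
        self.A = rng.standard_normal((r, d_in)) * 0.02      # LoRA down-projection
        self.B = np.zeros((d_out, r))                       # LoRA up-projection (zero init)
        self.w_g = rng.standard_normal(d_emo) * 0.02        # gate weights over emotion signal
        self.b_g = 0.0                                      # gate bias
        self.scale = alpha / r

    def gate(self, e):
        # Sigmoid keeps the gate in (0, 1): a soft switch on the LoRA update.
        return 1.0 / (1.0 + np.exp(-(self.w_g @ e + self.b_g)))

    def forward(self, x, e):
        base = self.W @ x                                   # frozen base path
        delta = self.scale * (self.B @ (self.A @ x))        # low-rank update
        return base + self.gate(e) * delta                  # emotion-gated residual
```

With the standard zero initialization of `B`, the layer starts out identical to the frozen base model, and the gate only modulates how strongly the learned low-rank update is applied per input.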
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: empathetic response generation, controllable text generation, emotion-conditioned generation, Parameter-Efficient Fine-Tuning, LoRA, gating mechanisms, dialogue systems, human-like response imitation, emotional alignment, large language models
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches low compute settings-efficiency, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 9559