MDR: Memory Distillation and Reproduction for Personalized Dialogue Generation

ACL ARR 2025 May Submission 5999 Authors

20 May 2025 (modified: 03 Jul 2025), ACL ARR 2025 May Submission, CC BY 4.0
Abstract: Personalized dialogue generation requires a chatbot to produce responses that match the user's personality preferences and remain consistent with prior interactions. Long conversations make personalized, coherent responding difficult, and the difficulty is compounded because most current systems generate responses by directly encoding features of the various pieces of personalized information. To better exploit the correlation between encoded features and actual responses, this paper proposes the Memory Distillation and Reproduction (MDR) framework. For sentence feature encoding, a student encoder is trained via knowledge distillation to align with and fit the response features produced by a teacher encoder, improving the understanding of personality and complex contexts. For response generation, the decoding process is adapted to the contribution degree of each response token. In this way, MDR integrates users' historical dialogues and personalized knowledge to construct up-to-date user profiles. Extensive experiments on the ConvAI2 and Baidu PersonaChat datasets, with automatic evaluation against 8 SOTA competitors, validate the superiority of MDR in terms of Coherence, Diversity, and Consistency. Notably, MDR achieves a BLEU-1 of 19.40 and a Coh-Con.Score of 37.14 on ConvAI2, and a ROUGE-L of 27.32 and an S-Dist-2 of 92.08 on Baidu PersonaChat.
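The abstract describes a teacher-student distillation step in which a student encoder (over persona and dialogue history) is trained to align with the features a teacher encoder produces for the gold response. The following is only a minimal sketch of that alignment idea under assumed details; the module names, dimensions, pooling choice, and the use of MSE as the distillation loss are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of teacher-student feature alignment via knowledge
# distillation (assumed details, not MDR's actual architecture).
import torch
import torch.nn as nn

class SimpleEncoder(nn.Module):
    def __init__(self, vocab_size: int = 30522, dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Mean-pool token states into a single sentence feature vector.
        return self.encoder(self.embed(token_ids)).mean(dim=1)

teacher = SimpleEncoder()  # encodes the gold response (kept frozen here)
student = SimpleEncoder()  # encodes persona + dialogue history

context_ids = torch.randint(0, 30522, (8, 64))   # dummy persona + history tokens
response_ids = torch.randint(0, 30522, (8, 32))  # dummy gold response tokens

with torch.no_grad():
    target_feat = teacher(response_ids)
student_feat = student(context_ids)

# Distillation-style alignment loss: pull the student's context features
# toward the teacher's response features.
distill_loss = nn.functional.mse_loss(student_feat, target_feat)
distill_loss.backward()
```

In this sketch only the student receives gradients, so the context representation is pulled toward the response-side features; the paper may instead use a different loss, pooling, or teacher-update scheme.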
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: conversational modeling, retrieval, applications
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English, Chinese
Keywords: conversational modeling, retrieval, applications
Submission Number: 5999