Keywords: Explainable Recommendation, Temporal Evolution, Causal Representation Learning, Chain-of-Thought
Abstract: Explainable recommendation has gained increasing attention for its ability to build user trust through transparent and meaningful justifications. However, real-world user preferences and item attributes are inherently dynamic, whereas most existing methods rely on static historical interactions, often producing outdated recommendations and implausible explanations. In this paper, we propose DyCEX (Dynamic Causal Explanation), a novel framework for generating causally grounded, temporally aware, and cognitively plausible explanations in dynamic recommendation scenarios. Specifically, we first design a causality-guided representation learner that models the temporal evolution of users and items through inferred cause-effect relationships, effectively filtering out obsolete signals to better reflect present-day interests. Second, we employ a dual-path gated fusion strategy that distinguishes stable thematic affinities from transient stylistic trends by adaptively reweighting features across time, yielding more coherent user and item representations. Third, we leverage a Large Language Model (LLM) guided by Chain-of-Thought (CoT) prompting to generate step-by-step natural language explanations that logically connect current user needs with relevant item attributes. Extensive experiments on three real-world datasets demonstrate that DyCEX significantly outperforms state-of-the-art baselines in explanation quality.
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: Data Influence, Explanation Faithfulness, free-text/natural language explanations, hierarchical & concept explanations
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 7026