Abstract: Most graph-based explainable recommender systems use paths within knowledge graphs to explain the recommended items. However, existing techniques often fail to explain these paths in detail, making it difficult to elaborate in a fine-grained manner on why a particular path is selected and, specifically, which relations along the path play a crucial role. In this paper, we propose an explainable recommendation model with counterfactual path augmentation for reinforcement reasoning. Specifically, we propose a user preference learning method based on counterfactual path augmentation, which leverages counterfactual reasoning to learn the degree of trust that users assign to candidate paths and even to the individual relations within them. We then propose a dual-reward reinforcement learning approach for generating recommendations and explanations; it integrates path-oriented rewards with item-oriented rewards to simultaneously improve the accuracy and explainability of the model. Finally, we propose two novel evaluation metrics, namely stability and effectiveness, to assess the quality of explanations. We evaluate our model on four real-world datasets, and the experimental results show its superiority over state-of-the-art recommendation models.
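The dual-reward idea described in the abstract can be sketched as a simple scalarization of two signals: an item-oriented reward that checks whether the reasoning path terminates at the ground-truth item, and a path-oriented reward derived from how much the user trusts the relations on the path. The function names, the averaging of trust scores, and the convex-combination weighting below are illustrative assumptions, not the paper's actual formulation:

```python
def item_reward(recommended_item: str, target_item: str) -> float:
    """Item-oriented reward: 1.0 if the path ends at the ground-truth item.
    (Hypothetical; the paper may use a softer accuracy signal.)"""
    return 1.0 if recommended_item == target_item else 0.0


def path_reward(relation_trust: list[float]) -> float:
    """Path-oriented reward: average trust assigned to the path's relations.
    Trust scores would come from the counterfactual preference model
    described in the abstract; here they are plain floats in [0, 1]."""
    return sum(relation_trust) / len(relation_trust) if relation_trust else 0.0


def dual_reward(recommended_item: str, target_item: str,
                relation_trust: list[float], alpha: float = 0.5) -> float:
    """Convex combination of the two rewards; alpha trades off
    recommendation accuracy against explanation quality."""
    return (alpha * item_reward(recommended_item, target_item)
            + (1.0 - alpha) * path_reward(relation_trust))
```

For example, a correct recommendation reached through relations trusted at 0.8 and 0.6 would, under this sketch, earn `dual_reward("i1", "i1", [0.8, 0.6])` of roughly 0.85 with the default weighting.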
External IDs: dblp:conf/dasfaa/KouJSZLNY25