Abstract: Explainable recommendation systems not only provide users with recommended items but also explain why they are recommended. Most existing explainable recommendation methods leverage sentiment analysis to help users understand the reasons behind recommendations: they either convert specific preferences into sentiment scores or simply feed the rating into the model as the overall sentiment. However, rating information alone cannot give users detailed reasons for a recommendation. To encode richer sentiment information, some methods incorporate user opinions into the explanations; but because such opinion-based systems lack supervision from sentiment, the generated explanations are generally limited to templates. To address these issues, we propose Sentiment-opinion Alignment Explainable Recommendation (SAER), a model that combines sentiment and opinion so that the opinion expressed in an explanation is consistent with the user's sentiment toward the product. Moreover, SAER produces informative explanations with diverse opinions for recommended items. Experiments on real-world datasets demonstrate that SAER outperforms state-of-the-art explainable recommendation methods.