Multi-View Enhanced Graph Attention Network for Session-Based Music Recommendation

Published: 01 Jan 2024, Last Modified: 12 Feb 2025 · ACM Trans. Inf. Syst. 2024 · CC BY-SA 4.0
Abstract: Traditional music recommender systems are mainly based on users' interaction records, which limits their performance. In particular, various kinds of content information, such as metadata and descriptions, can be used to improve music recommendation. However, how to fully incorporate such rich auxiliary/side information and effectively handle its heterogeneity remains an open problem. In this paper, we propose a Multi-view Enhanced Graph Attention Network (MEGAN) for session-based music recommendation. MEGAN learns informative representations (embeddings) of music pieces and users from heterogeneous information, based on graph neural networks and attention mechanisms. Specifically, MEGAN first models users' listening behaviors and the textual content of music pieces with a Heterogeneous Music Graph (HMG). Then, a devised graph attention network learns low-dimensional embeddings of music pieces and users by integrating various kinds of information, enhanced by multiple views of the HMG in an adaptive and unified way. Finally, users' hybrid preferences are learned from their listening behaviors, and music pieces that satisfy users' real-time requirements are recommended. Comprehensive experiments are conducted on two real-world datasets, and the results show that MEGAN achieves better performance than baselines, including several state-of-the-art recommendation methods.
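The abstract does not give MEGAN's equations, but its core building block, attention-weighted neighbor aggregation on a graph, can be illustrated with a minimal sketch. The code below is a generic single-head graph-attention layer in the style of standard GATs, not the authors' implementation; all names (`gat_layer`, `a_src`, `a_dst`) and the dense-adjacency formulation are illustrative assumptions.

```python
import numpy as np

def gat_layer(H, A, W, a_src, a_dst, slope=0.2):
    """One simplified graph-attention layer (illustrative, not MEGAN itself).

    H: (N, F) node features (e.g., music pieces and users in one graph)
    A: (N, N) adjacency with self-loops (nonzero = edge)
    W: (F, F_out) shared projection; a_src, a_dst: (F_out,) attention vectors
    """
    Z = H @ W                                   # project node features
    S = (Z @ a_src)[:, None] + (Z @ a_dst)[None, :]  # pairwise attention logits
    S = np.maximum(S, slope * S)                # LeakyReLU
    S = np.where(A != 0, S, -np.inf)            # mask non-neighbors
    S -= S.max(axis=1, keepdims=True)           # numerical stability
    alpha = np.exp(S)
    alpha /= alpha.sum(axis=1, keepdims=True)   # softmax over each node's neighbors
    return alpha @ Z                            # attention-weighted aggregation
```

In a heterogeneous graph like the HMG, a full model would typically run such layers per edge type (user-piece, piece-text) and fuse the resulting views; this sketch shows only the shared attention mechanism.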
