MMPOI: A Multi-Modal Content-Aware Framework for POI Recommendations

Published: 23 Jan 2024, Last Modified: 23 May 2024, TheWebConf24
Keywords: Multi-modal, POI recommendation, Recommender systems
Abstract: Point-of-Interest (POI) recommendation systems, which recommend potential future visits to users based on their check-in sequences, face a data-sparsity challenge: each user interacts with only a small number of POIs. Most existing studies attempt to solve this problem by focusing solely on POI check-in sequences, without considering the substantial multi-modal content information (e.g., textual and image data) commonly associated with POIs. In this paper, we propose a novel multi-modal content-aware framework for POI recommendation (MMPOI). Our approach addresses data sparsity by incorporating multi-modal content information about POIs from a new perspective. Specifically, MMPOI leverages pre-trained models for inter-modal conversion and employs a unified pre-trained model to extract modal-specific features from each modality, effectively bridging the semantic gap between modalities. We then build a Multi-Modal Trajectory Flow Graph (MTFG) that combines the multi-modal semantic structure with check-in sequences. Moreover, we design an adaptive multi-task Transformer that models users' multi-modal movement patterns and integrates them for the next-POI recommendation task. Extensive experiments on four real-world datasets demonstrate that MMPOI outperforms state-of-the-art POI recommendation methods. To facilitate reproducibility, we have released both the code and the multi-modal POI recommendation datasets we collected.
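As a rough illustration only (not the authors' released implementation), the sketch below shows one plausible way to realize the "inter-modal conversion + unified encoder" step described in the abstract: image content is assumed to have already been converted to captions by a pre-trained image-to-text model, the sentence-transformers library stands in for the unified pre-trained encoder, and the cosine-similarity threshold for adding semantic edges to a trajectory flow graph is a hypothetical choice.

```python
# Illustrative sketch of multi-modal feature extraction with a unified encoder.
# NOT the MMPOI implementation; model name, sample data, and the similarity
# threshold are assumptions made purely for demonstration.
from sentence_transformers import SentenceTransformer
import numpy as np

# Unified pre-trained text encoder applied to every modality after conversion.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Example POI content: reviews are native text; image captions stand in for the
# visual modality after a pre-trained image-to-text conversion step.
poi_content = {
    "poi_42": {
        "review": "Cozy coffee shop with fast Wi-Fi and quiet seating.",
        "image_caption": "a latte and a laptop on a wooden table",
    },
    "poi_77": {
        "review": "Busy ramen bar, long lines on weekends.",
        "image_caption": "a bowl of ramen with pork and scallions",
    },
}

def modal_features(content: dict) -> dict:
    """Encode each modality with the same encoder so features share one space."""
    return {modality: encoder.encode(text) for modality, text in content.items()}

features = {poi: modal_features(c) for poi, c in poi_content.items()}

def fused(poi_feats: dict) -> np.ndarray:
    """Fuse per-modality features by simple averaging (one possible choice)."""
    return np.mean(list(poi_feats.values()), axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# One simple way to add multi-modal semantic edges to a trajectory flow graph:
# connect POIs whose fused features are sufficiently similar.
sim = cosine(fused(features["poi_42"]), fused(features["poi_77"]))
semantic_edge = sim > 0.5  # hypothetical threshold
print(f"similarity={sim:.3f}, add semantic edge: {semantic_edge}")
```

In this reading, the same encoder produces features for every modality, so semantic edges between POIs can be added alongside the check-in transition edges when constructing a trajectory flow graph; the actual MTFG construction and the adaptive multi-task Transformer are described in the paper itself.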
Track: User Modeling and Recommendation
Submission Guidelines Scope: Yes
Submission Guidelines Blind: Yes
Submission Guidelines Format: Yes
Submission Guidelines Limit: Yes
Submission Guidelines Authorship: Yes
Student Author: No
Submission Number: 734