NextLocMoE: Enhancing Next Location Prediction via Location-Semantics Mixture-of-Experts and Personalized Mixture-of-Experts
Keywords: next location prediction, Mixture-of-Experts, Large Language Model, Location Function MoE, Persona MoE
TL;DR: We propose NextLocMoE, a Mixture-of-Experts LLM framework for next-location prediction, which jointly models location semantics and behavioral preferences via dual expert modules and history-aware routing.
Abstract: Next location prediction is a key task in human mobility modeling. Existing methods face two challenges: (1) they fail to capture the multi-faceted semantics of real-world locations; and (2) they struggle to model diverse behavioral patterns across user groups. To address these issues, we propose NextLocMoE, a large language model (LLM)-based framework for next location prediction, which integrates a dual-level Mixture-of-Experts (MoE) architecture. It comprises two complementary modules: a Location Semantics MoE at the embedding level to model multi-functional location semantics, and a Personalized MoE within the LLM's Transformer layers to adaptively capture user behavior patterns. To enhance routing stability and reliability, we introduce a history-aware router that integrates long-term historical trajectories into expert selection. Experiments on multiple real-world datasets demonstrate that NextLocMoE significantly outperforms existing methods in terms of accuracy, transferability, and interpretability. Code is available at: https://anonymous.4open.science/r/NextLocMOE-BAC8.
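To make the history-aware routing idea concrete, below is a minimal, illustrative sketch (not the authors' implementation) of an MoE layer whose router conditions on a pooled long-term history embedding in addition to the current hidden state. All module names, dimensions, and the top-k gating scheme are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HistoryAwareMoE(nn.Module):
    """Toy MoE layer with a router conditioned on long-term history (illustrative)."""

    def __init__(self, d_model: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, 4 * d_model),
                           nn.GELU(),
                           nn.Linear(4 * d_model, d_model))
             for _ in range(num_experts)]
        )
        # The router sees the current token state concatenated with a pooled
        # representation of the user's long-term trajectory history.
        self.router = nn.Linear(2 * d_model, num_experts)

    def forward(self, hidden: torch.Tensor, history: torch.Tensor) -> torch.Tensor:
        # hidden:  (batch, seq_len, d_model)  encoding of the recent trajectory
        # history: (batch, hist_len, d_model) encoding of the long-term history
        hist_summary = history.mean(dim=1, keepdim=True)            # (batch, 1, d_model)
        hist_summary = hist_summary.expand(-1, hidden.size(1), -1)  # broadcast over tokens
        gate_logits = self.router(torch.cat([hidden, hist_summary], dim=-1))
        weights = F.softmax(gate_logits, dim=-1)                    # (batch, seq, num_experts)

        # Sparse top-k gating: keep the k largest weights per token and renormalize.
        topk_w, topk_idx = weights.topk(self.top_k, dim=-1)
        topk_w = topk_w / topk_w.sum(dim=-1, keepdim=True)

        out = torch.zeros_like(hidden)
        for slot in range(self.top_k):
            idx = topk_idx[..., slot]                                # (batch, seq)
            w = topk_w[..., slot].unsqueeze(-1)                      # (batch, seq, 1)
            for e, expert in enumerate(self.experts):
                mask = (idx == e).unsqueeze(-1)                      # tokens routed to expert e
                if mask.any():
                    out = out + mask * w * expert(hidden)
        return out


if __name__ == "__main__":
    layer = HistoryAwareMoE(d_model=64)
    current = torch.randn(2, 10, 64)   # recent trajectory states
    past = torch.randn(2, 50, 64)      # long-term historical states
    print(layer(current, past).shape)  # torch.Size([2, 10, 64])
```

The only intended point of the sketch is the routing signal: expert selection depends on both the current state and a summary of the long-term trajectory, which is the role the abstract attributes to the history-aware router.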
Primary Area: foundation or frontier models, including LLMs
Submission Number: 86