MoveGPT: Scaling Mobility Foundation Models with Spatially-Aware Mixture of Experts

10 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Human mobility, foundation models, mixture of experts
TL;DR: We propose MoveGPT, a foundation model that learns a universal "language" of human movement by pre-training on over a billion mobility records, establishing a new state of the art across a wide range of downstream tasks.
Abstract: The success of foundation models in language has inspired a new wave of general-purpose models for human mobility. However, existing approaches struggle to scale effectively due to two fundamental limitations: a failure to use meaningful basic units to represent movement, and an inability to capture the vast diversity of patterns found in large-scale data. In this work, we develop MoveGPT, a large-scale foundation model specifically architected to overcome these barriers. MoveGPT is built upon two key innovations: (1) a unified location encoder that maps geographically disjoint locations into a shared semantic space, enabling pre-training at billion scale; and (2) a Spatially-Aware Mixture-of-Experts Transformer that develops specialized experts to efficiently capture diverse mobility patterns. Pre-trained on billion-scale datasets, MoveGPT establishes a new state of the art across a wide range of downstream tasks, with average performance gains of up to 35\%. It also generalizes strongly to unseen cities. Crucially, our work provides empirical evidence of scaling behavior in human mobility, validating a clear path toward increasingly capable foundation models in this domain. The source code and pre-trained models for MoveGPT are publicly available at: https://anonymous.4open.science/r/MoveGPT-FC72/.
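The abstract's second innovation, a spatially-aware Mixture-of-Experts layer, can be illustrated with a minimal sketch. This is not the paper's implementation: all parameter names, dimensions, and the specific way a location embedding biases the routing logits are illustrative assumptions; the sketch only shows the general pattern of gating each token to a few specialized feed-forward experts, with spatial information influencing which experts fire.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, d_ff = 16, 4, 32

# Hypothetical parameters (not from the paper): routing logits see both the
# token's hidden state and a spatial embedding of its location.
W_gate = rng.normal(size=(d_model, n_experts)) * 0.1
W_spatial = rng.normal(size=(d_model, n_experts)) * 0.1
experts = [(rng.normal(size=(d_model, d_ff)) * 0.1,
            rng.normal(size=(d_ff, d_model)) * 0.1) for _ in range(n_experts)]

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def spatial_moe(h, loc_emb, top_k=2):
    """Route each token to its top-k experts; gates mix content and space."""
    logits = h @ W_gate + loc_emb @ W_spatial        # (n_tokens, n_experts)
    probs = softmax(logits)
    out = np.zeros_like(h)
    for t in range(h.shape[0]):
        top = np.argsort(probs[t])[-top_k:]          # indices of top-k experts
        w = probs[t, top] / probs[t, top].sum()      # renormalized gate weights
        for e_idx, wt in zip(top, w):
            W1, W2 = experts[e_idx]
            out[t] += wt * (np.maximum(h[t] @ W1, 0.0) @ W2)  # ReLU FFN expert
    return out

tokens = rng.normal(size=(5, d_model))   # 5 visit tokens in a trajectory
locs = rng.normal(size=(5, d_model))     # their location embeddings
y = spatial_moe(tokens, locs)
print(y.shape)  # (5, 16)
```

Because only `top_k` of the experts run per token, compute stays roughly constant as the expert count grows, which is the usual motivation for MoE when scaling to diverse, large-scale data.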
Primary Area: foundation or frontier models, including LLMs
Submission Number: 3627