Generating Mobility Trajectories with Reinforcement Learning-Enhanced Generative Pre-trained Transformer
Keywords: Mobility trajectories; Transformer-based model; Reinforcement learning; Inverse reinforcement learning; TrajGPT-R
TL;DR: A two-phase transformer + RL/IRL framework generates realistic urban mobility trajectories, improving reliability, diversity, and planning applications.
Abstract: Mobility trajectories are essential for understanding urban dynamics and enhancing urban planning, yet access to such data is frequently hindered by privacy concerns. This research introduces a framework for generating large-scale urban mobility trajectories, built on a transformer-based model that is pre-trained and then fine-tuned in a two-phase process. First, trajectory generation is formulated as an offline reinforcement learning (RL) problem, with the tokenization designed to substantially reduce the vocabulary space. Inverse Reinforcement Learning (IRL) is then used to capture trajectory-wise reward signals, leveraging historical data to infer individual mobility preferences. Finally, the pre-trained model is fine-tuned with the constructed reward model, addressing challenges inherent in traditional RL-based autoregressive methods, such as long-term credit assignment and sparse rewards. Comprehensive evaluations on multiple datasets show that our framework markedly surpasses existing models in reliability and diversity. Our findings advance urban mobility modeling and provide a robust methodology for simulating urban data, with significant implications for traffic management and urban development planning.
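The abstract mentions a significant vocabulary reduction achieved during tokenization. One common way to obtain such a reduction is to factorize each trajectory point into separate location and time-slot tokens, shrinking the vocabulary from |L| × |T| joint tokens to |L| + |T| tokens. The sketch below illustrates this idea; the function names and the (location, time-slot) encoding are illustrative assumptions, not the paper's actual scheme.

```python
# Hypothetical sketch: factorized tokenization of mobility trajectories.
# Assumption (not from the paper): each point is a (location_id, time_slot)
# pair, and emitting separate location and time tokens reduces vocabulary
# size from |L| * |T| joint tokens to |L| + |T| factorized tokens.

def build_vocab(num_locations, num_time_slots):
    """Build a factorized vocabulary: location tokens first, then time tokens."""
    loc_tokens = {f"LOC_{i}": i for i in range(num_locations)}
    time_tokens = {f"T_{j}": num_locations + j for j in range(num_time_slots)}
    return {**loc_tokens, **time_tokens}

def tokenize(trajectory, num_locations):
    """Flatten a list of (location_id, time_slot) pairs into token ids."""
    tokens = []
    for loc, t in trajectory:
        tokens.append(loc)                # location token id
        tokens.append(num_locations + t)  # time token id, offset past locations
    return tokens

# 10,000 grid cells x 48 half-hour slots: 10,048 tokens instead of 480,000.
vocab = build_vocab(10_000, 48)
print(len(vocab))                             # -> 10048
print(tokenize([(42, 17), (7, 18)], 10_000))  # -> [42, 10017, 7, 10018]
```

With this factorization, the autoregressive model alternates between predicting a location token and a time token, which keeps the softmax layer small while still representing every joint (location, time) state.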
Primary Area: generative models
Submission Number: 678