Abstract: In this paper, we address the problem of learning and modelling the behaviours of agents, such as pedestrians, in urban traffic environments using their trajectories. Existing state-of-the-art methods primarily rely on data-driven approaches to predict future trajectories. However, these approaches often overlook the influence of the physical environment on agents' decisions and struggle to model longer sequential trajectory data effectively. To overcome these limitations, we propose a novel hybrid framework that uses the attributes of the physical environment to predict the future trajectory an agent might take on the road. First, we capture agents' preferences in various urban traffic environments using a deep reward learning technique. Next, leveraging the learned reward map and short past motion trajectories of the agents, we employ a probabilistic data-driven sequential model based on transformer networks to provide robust long-term forecasting of agents' trajectories. In our experiments, the proposed framework was evaluated on a large-scale real-world dataset of agents in urban traffic environments, where it outperforms state-of-the-art techniques by a significant margin.
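The abstract describes a hybrid architecture that fuses a learned reward map of the physical environment with a transformer-based probabilistic sequence model. Below is a minimal PyTorch sketch of this general idea, not the authors' implementation: the module names, dimensions, fusion scheme, and the per-step Gaussian output are all illustrative assumptions.

```python
# Minimal sketch (assumed PyTorch implementation, not the paper's code) of the
# hybrid idea: a reward map over the physical environment is fused with an
# agent's short past trajectory, and a transformer encoder drives a
# probabilistic long-term forecast (Gaussian parameters per future step).
import torch
import torch.nn as nn


class HybridTrajectoryForecaster(nn.Module):
    def __init__(self, reward_channels=1, d_model=64, horizon=12, nhead=4, num_layers=2):
        super().__init__()
        self.horizon = horizon
        # Encode the learned reward map (C x H x W grid of environment
        # preferences) into a single context vector.
        self.reward_encoder = nn.Sequential(
            nn.Conv2d(reward_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, d_model),
        )
        # Embed past (x, y) positions and encode them with a transformer encoder.
        self.traj_embed = nn.Linear(2, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=num_layers)
        # Predict a mean and log-variance for each future step (probabilistic output).
        self.head = nn.Linear(2 * d_model, horizon * 4)

    def forward(self, past_traj, reward_map):
        # past_traj: (B, T_past, 2), reward_map: (B, C, H, W)
        env_ctx = self.reward_encoder(reward_map)          # (B, d_model)
        h = self.encoder(self.traj_embed(past_traj))       # (B, T_past, d_model)
        motion_ctx = h[:, -1]                              # last-step motion summary
        out = self.head(torch.cat([motion_ctx, env_ctx], dim=-1))
        out = out.view(-1, self.horizon, 4)
        mean, log_var = out[..., :2], out[..., 2:]         # per-step Gaussian parameters
        return mean, log_var


if __name__ == "__main__":
    model = HybridTrajectoryForecaster()
    past = torch.randn(8, 8, 2)          # 8 agents, 8 observed steps
    reward = torch.randn(8, 1, 32, 32)   # learned reward-map patch per agent
    mean, log_var = model(past, reward)
    print(mean.shape, log_var.shape)     # (8, 12, 2) each
```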