Open-World Pedestrian Trajectory Prediction

17 Sept 2025 (modified: 13 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Pedestrian trajectory prediction, Pattern clustering, Open-world environment
TL;DR: We formalize Open-World Pedestrian Trajectory Prediction and propose a goal-based framework that maps trajectories to abstract motion patterns, enabling continual prediction and the detection/accommodation of novel motions.
Abstract: Most deep learning-based pedestrian trajectory prediction models are trained offline, which significantly limits their performance when they encounter novel motion patterns in open-world environments. To endow trajectory prediction agents with lifelong learning, we introduce the Open-World Pedestrian Trajectory Prediction (OWPTP) task. OWPTP requires models to autonomously detect distribution shifts in motion patterns, continually accommodate novel pattern information, and retain previously acquired knowledge. However, motion patterns are abstract and ill-defined. Our analysis indicates that the dominant source of motion pattern discrimination is trajectory epistemic uncertainty tied to pedestrian goals. Based on this insight, we propose the Goal-based Motion Pattern Detection and Replay (GMPDR) framework. By modeling epistemic uncertainty, GMPDR extracts pattern-related trajectory features and builds an explicit instance-to-pattern mapping through dual contrast modules to delineate motion pattern boundaries. On top of this mapping, we formulate hyperspherical novelty detection and sparse, representative replay mechanisms at the motion-pattern level. These mechanisms respectively achieve novelty detection anchored to model-defined patterns and accommodation that preserves the semantic integrity of the patterns. The framework is extensible and integrates seamlessly with various existing trajectory predictors. Experiments demonstrate that GMPDR effectively adapts to novelty and reduces forgetting. The anonymous code link is provided in the reproducibility statement.
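The abstract's hyperspherical novelty detection can be illustrated with a minimal sketch. This is not the authors' implementation; the class name, the prototype-per-pattern design, and the cosine-similarity threshold are all illustrative assumptions about how detection "anchored to model-defined patterns" might work: trajectory embeddings are projected onto the unit hypersphere, each known pattern is summarized by a unit-norm prototype direction, and an embedding whose best cosine similarity to every prototype falls below a threshold is flagged as novel.

```python
import numpy as np


def l2_normalize(x, axis=-1, eps=1e-12):
    """Project vectors onto the unit hypersphere."""
    x = np.asarray(x, dtype=float)
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)


class HypersphericalNoveltyDetector:
    """Illustrative sketch: one unit-norm prototype per known motion
    pattern; an embedding is novel if its maximum cosine similarity
    to all prototypes is below `threshold`."""

    def __init__(self, threshold=0.7):
        self.threshold = threshold
        self.prototypes = []  # list of unit vectors, one per pattern

    def add_pattern(self, embeddings):
        """Register a known pattern; its prototype is the normalized
        mean direction of the (normalized) member embeddings."""
        mean_dir = l2_normalize(np.mean(l2_normalize(embeddings), axis=0))
        self.prototypes.append(mean_dir)

    def is_novel(self, embedding):
        """True if the embedding matches no registered pattern."""
        if not self.prototypes:
            return True
        z = l2_normalize(embedding)
        sims = np.stack(self.prototypes) @ z  # cosine similarities
        return bool(np.max(sims) < self.threshold)


# Toy usage: one pattern roughly along the first axis.
detector = HypersphericalNoveltyDetector(threshold=0.7)
detector.add_pattern([[1.0, 0.1], [0.9, -0.1]])
print(detector.is_novel([1.0, 0.05]))  # close to the prototype -> False
print(detector.is_novel([0.0, 1.0]))   # near-orthogonal -> True
```

In the full framework, deciding `is_novel` would trigger accommodation (adding a new pattern and replaying sparse, representative samples of old ones); here it only returns a flag.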
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 8364