Recovering Crowd Trajectories in Invisible Area of Camera Networks

Published: 01 Jan 2025, Last Modified: 15 May 2025 · IEEE Trans. Intell. Transp. Syst. 2025 · CC BY-SA 4.0
Abstract: Understanding the movement of crowds is important to the management of public places and urban safety. Existing research has mostly focused on tracking pedestrians in video clips from a single camera or across multiple cameras (Multi-Object Tracking, MOT) by identifying individuals with similar appearance or spatial-temporal movement features. However, how crowds navigate through the invisible areas between cameras in crowded environments has been overlooked. Moreover, identifying individuals across cameras in a crowded environment can be challenging due to cluttered pedestrian appearance and highly uncertain movements. In this paper, we focus on recovering crowd trajectories in the invisible areas of sparse camera networks within crowded public environments. We achieve better spatial-temporal feature matching by estimating the most likely travel time between segmented tracklet observations of individuals, with careful consideration of pedestrian interactions, which reduces the dependence on unreliable appearance features. We then recover trajectories for matched tracklets in the invisible areas with a high-fidelity crowd simulation model. Extensive experiments on two real-world trajectory datasets show that our proposed method is superior to existing spatial-temporal-based MOT methods and improves appearance-based MOT models in terms of association accuracy and trajectory fidelity.
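To illustrate the travel-time-based association idea described in the abstract, the following is a minimal, hypothetical sketch, not the authors' implementation: it scores each (exit tracklet, entry tracklet) pair by how plausible the observed time gap is under a simple Gaussian travel-time model, then solves the matching with the Hungarian algorithm. The Gaussian likelihood, the density-based slowdown, and all function names here are illustrative assumptions; the paper's method models pedestrian interactions in more detail and additionally recovers full in-between trajectories with a crowd simulation model, which this sketch omits.

```python
# Hedged sketch: cross-camera tracklet association via travel-time plausibility.
# The travel-time model and slowdown law below are assumptions for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment


def expected_travel_time(exit_pos, entry_pos, free_speed=1.3, density=0.0):
    """Expected time (s) to cross the invisible area between two camera views.

    free_speed : assumed preferred walking speed in m/s.
    density    : local crowd density in persons/m^2; higher density slows walkers
                 (a crude stand-in for an interaction-aware travel-time estimate).
    """
    distance = np.linalg.norm(np.asarray(entry_pos) - np.asarray(exit_pos))
    speed = free_speed / (1.0 + density)  # assumed slowdown with density
    return distance / speed


def association_cost(exits, entries, sigma=2.0):
    """Negative log-likelihood (up to constants) of each (exit, entry) pairing
    under a Gaussian travel-time model.

    exits, entries : lists of (position, timestamp) for tracklet end/start points.
    """
    cost = np.zeros((len(exits), len(entries)))
    for i, (p_out, t_out) in enumerate(exits):
        for j, (p_in, t_in) in enumerate(entries):
            observed = t_in - t_out
            expected = expected_travel_time(p_out, p_in)
            if observed <= 0:  # entry before exit: physically impossible pairing
                cost[i, j] = 1e6
            else:
                cost[i, j] = ((observed - expected) / sigma) ** 2
    return cost


# Toy usage: two tracklets leaving camera A, two appearing later in camera B.
exits = [((0.0, 0.0), 10.0), ((0.0, 5.0), 12.0)]
entries = [((13.0, 0.0), 20.0), ((13.0, 5.0), 22.0)]
rows, cols = linear_sum_assignment(association_cost(exits, entries))
print(list(zip(rows, cols)))  # matched (exit index, entry index) pairs
```

In this toy setup the minimum-cost assignment pairs each exiting tracklet with the entry whose time gap best matches the expected crossing time, without using appearance features at all; appearance could still be folded in as an extra cost term when it is reliable.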