Abstract: We introduce UniCrowd1, a human crowd simulator for modeling human-related dynamics. The simulator is accompanied by a meticulously collected dataset generated within its synthetic environment, along with a comprehensive validation pipeline. Leveraging simulation as a powerful tool for generating annotated data, UniCrowd addresses the growing demand for large training datasets by mimicking both the behavioral and visual aspects of crowds. Recent advancements in rendering and virtualization engines have enhanced simulators' capabilities to represent complex scenes, encompassing environmental factors such as weather conditions and surface reflectance, as well as human-related events such as actions and behaviors. The adaptability and non-deterministic nature of UniCrowd's human behavioral module, coupled with its 3D rendering, represent an improvement over available crowd simulators. We demonstrate the suitability of our simulator and its associated dataset for various computer vision tasks, highlighting applications such as detection and segmentation, as well as specialized tasks including crowd counting, human pose estimation, and trajectory analysis and prediction. 1The simulator and the dataset can be accessed at github.com/mmlabcv/UniCrowd
External IDs: dblp:conf/icip/BisagnoSGNC24