EgoSim: An Egocentric Multi-view Simulator and Real Dataset for Body-worn Cameras during Motion and Activity
Keywords: egocentric perception, body-worn cameras, human pose estimation, simulator, data generation, synthetic data
TL;DR: We propose a novel simulator and dataset for pose estimation with egocentric body-worn cameras.
Abstract: Research on egocentric tasks in computer vision has mostly focused on head-mounted cameras, such as fisheye cameras or embedded cameras inside immersive headsets.
We argue that the increasing miniaturization of optical sensors will lead to the prolific integration of cameras into many more body-worn devices at various locations.
This will bring fresh perspectives to established tasks in computer vision and benefit key areas such as human motion tracking, body pose estimation, or action recognition---particularly for the lower body, which is typically occluded.
In this paper, we introduce EgoSim, a novel simulator of body-worn cameras that generates realistic egocentric renderings from multiple perspectives across a wearer's body.
A key feature of EgoSim is its use of real motion capture data to render motion artifacts, which are especially noticeable with arm- or leg-worn cameras.
In addition, we introduce MultiEgoView, a dataset of egocentric footage from six body-worn cameras and ground-truth full-body 3D poses during several activities:
119 hours of data are derived from AMASS motion sequences in four high-fidelity virtual environments, which we augment with 5 hours of real-world motion data from 13 participants using six GoPro cameras and 3D body pose references from an Xsens motion capture suit.
We demonstrate EgoSim's effectiveness by training an end-to-end video-only 3D pose estimation network.
Analyzing the simulation-to-real domain gap, we show that our dataset and simulator substantially aid training for inference on real-world data.
EgoSim code & MultiEgoView dataset: https://siplab.org/projects/EgoSim
Submission Number: 1772