Spatial Cognition from Egocentric Video: Out of Sight, Not Out of Mind

Published: 23 Mar 2025, Last Modified: 24 Mar 2025, 3DV 2025 Poster, CC BY 4.0
Keywords: Egocentric Video, 3D Understanding
TL;DR: From an egocentric video, we propose the task Out of Sight, Not Out of Mind: estimating the 3D locations of all active objects, both when they are in sight and when they are out of sight.
Abstract: As humans move around performing their daily tasks, they are able to recall where they have positioned objects in their environment, even if these objects are currently out of their sight. In this paper, we aim to mimic this spatial cognition ability. We thus formulate the task of *Out of Sight, Not Out of Mind* -- tracking active objects in 3D from observations captured by an egocentric camera. We introduce a simple but effective approach to this challenging problem, called Lift, Match, and Keep (LMK). LMK **lifts** partial 2D observations to 3D world coordinates, **matches** them over time using visual appearance, 3D location, and interactions to form object tracks, and **keeps** these object tracks even when they go out of view of the camera. We benchmark LMK on 100 long videos from EPIC-KITCHENS. Our results demonstrate that spatial cognition is critical for correctly locating objects over both short and long time scales. For example, in one long egocentric video, we estimate the 3D locations of 50 active objects; after 120 seconds, 57% of the objects are correctly localized by LMK, compared to just 33% by a recent 3D method for egocentric videos and 17% by a general 2D tracking method.
Supplementary Material: pdf
Submission Number: 27
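
To make the Lift, Match, Keep loop summarized in the abstract concrete, below is a minimal Python sketch. Every name, the pinhole back-projection, the greedy matcher, and the similarity weighting are illustrative assumptions rather than the authors' implementation; in particular, the paper's matching step also uses interactions, which this sketch omits.

```python
# Minimal sketch of a Lift, Match, Keep (LMK) step, under assumed inputs:
# per-frame 2D detections with depth and appearance embeddings, camera
# intrinsics K, and a camera-to-world pose. Not the authors' code.

from dataclasses import dataclass
import numpy as np

@dataclass
class Detection:
    """One 2D observation of an active object in a frame (assumed format)."""
    box_center: np.ndarray   # (u, v) pixel coordinates
    depth: float             # metric depth at the box center
    appearance: np.ndarray   # appearance embedding from some visual encoder

@dataclass
class Track:
    """One object hypothesis kept in 3D world coordinates."""
    position: np.ndarray     # last estimated 3D location (world frame)
    appearance: np.ndarray
    in_view: bool = True

def lift(det: Detection, K: np.ndarray, cam_to_world: np.ndarray) -> np.ndarray:
    """Lift: back-project a 2D detection to a 3D point in world coordinates."""
    u, v = det.box_center
    x_cam = np.linalg.inv(K) @ np.array([u, v, 1.0]) * det.depth   # camera frame
    x_world = cam_to_world @ np.append(x_cam, 1.0)                 # homogeneous
    return x_world[:3]

def similarity(track: Track, pos: np.ndarray, det: Detection) -> float:
    """Combine appearance similarity and 3D proximity (weights are arbitrary)."""
    app = float(track.appearance @ det.appearance /
                (np.linalg.norm(track.appearance) * np.linalg.norm(det.appearance)))
    dist = np.linalg.norm(track.position - pos)
    return app - 0.1 * dist

def lmk_step(tracks, detections, K, cam_to_world, sim_threshold=0.3):
    """Process one frame: lift detections, match them to tracks, keep the rest."""
    # Lift: partial 2D observations -> 3D world coordinates
    positions = [lift(d, K, cam_to_world) for d in detections]

    # Match: greedily associate detections with existing tracks
    unmatched = set(range(len(detections)))
    for track in tracks:
        track.in_view = False
        best, best_sim = None, sim_threshold
        for i in unmatched:
            s = similarity(track, positions[i], detections[i])
            if s > best_sim:
                best, best_sim = i, s
        if best is not None:
            track.position = positions[best]
            track.appearance = detections[best].appearance
            track.in_view = True
            unmatched.remove(best)
    # Keep: unmatched tracks retain their last 3D location instead of being
    # dropped, so out-of-view objects stay localized.

    # Remaining unmatched detections start new tracks
    for i in unmatched:
        tracks.append(Track(position=positions[i],
                            appearance=detections[i].appearance))
    return tracks
```

Running `lmk_step` once per frame over a long video accumulates a set of 3D object tracks; the key design point illustrated here is that unmatched tracks are never deleted, only flagged as out of view, which is what allows localization of objects long after they leave the camera's field of view.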