Keywords: bio-inspired learning, human vision, egocentric learning, self-supervised learning
TL;DR: Prioritizing central, gaze-predicted regions and temporal context in egocentric videos improves semantic object representations in self-supervised models, aligning them more closely with human visual learning.
Abstract: Humans acquire semantic object representations from egocentric visual streams with minimal supervision. Importantly, the visual system processes only the center of the field of view at high resolution and learns similar representations for visual inputs that occur close together in time, thereby emphasizing slowly changing information around gaze locations. This study investigates the role of central vision and slowness learning in the formation of semantic object representations in humans. We simulate five months of human-like visual experience using the Ego4D dataset and generate gaze coordinates with a state-of-the-art gaze prediction model. Using these predictions, we extract crops that mimic central vision and train a time-contrastive self-supervised learning model on them. Our results show that combining temporal slowness and central vision improves the encoding of different semantic facets of object representations. Specifically, focusing on central vision strengthens the extraction of foreground object features, while exploiting temporal slowness, especially during fixational eye movements, allows the model to encode broader semantic information about objects.
These findings provide new insights into the mechanisms by which humans may develop semantic object representations from natural visual experience. Our code will be made public upon acceptance.
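The following is a minimal, illustrative sketch of the two ideas the abstract combines: cropping around predicted gaze points to mimic central vision, and a time-contrastive (InfoNCE-style) objective that pulls together embeddings of temporally adjacent crops. All names here (`central_crop`, `time_contrastive_loss`, the dummy `frames` and `gaze` tensors, and the ResNet-18 encoder) are assumptions for exposition, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

def central_crop(frame, gaze_xy, size=224):
    """Crop a square patch around the predicted gaze point (central vision)."""
    _, H, W = frame.shape
    x, y = gaze_xy
    # Clamp so the crop stays fully inside the frame.
    left = int(min(max(x - size // 2, 0), W - size))
    top = int(min(max(y - size // 2, 0), H - size))
    return frame[:, top:top + size, left:left + size]

def time_contrastive_loss(z_anchor, z_positive, temperature=0.1):
    """InfoNCE-style loss: temporally adjacent crops are positives;
    the other samples in the batch serve as negatives."""
    z_a = F.normalize(z_anchor, dim=1)
    z_p = F.normalize(z_positive, dim=1)
    logits = z_a @ z_p.t() / temperature       # (B, B) similarity matrix
    labels = torch.arange(z_a.size(0))         # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Minimal training step on dummy data: encode gaze-centered crops from two
# frames close in time and pull their embeddings together (temporal slowness).
encoder = resnet18(num_classes=128)            # 128-d embedding head (assumed)
frames = torch.rand(8, 2, 3, 480, 640)         # (B, 2 adjacent frames, C, H, W)
gaze = torch.tensor([[320.0, 240.0]] * 2).repeat(8, 1, 1)  # dummy gaze points

crops_t  = torch.stack([central_crop(f[0], g[0]) for f, g in zip(frames, gaze)])
crops_t1 = torch.stack([central_crop(f[1], g[1]) for f, g in zip(frames, gaze)])
loss = time_contrastive_loss(encoder(crops_t), encoder(crops_t1))
loss.backward()
```

In this sketch, restricting the positive pair to gaze-centered crops is what ties the slowness objective to central vision: the model is rewarded for representing the foveated region consistently across nearby moments in time.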
Supplementary Material: pdf
Primary Area: applications to neuroscience & cognitive science
Submission Number: 16874