Keywords: representation learning, brain-inspired, hippocampus, video
Abstract: Recent advances in self-supervised learning (SSL) have revolutionized computer vision through innovative architectures and learning objectives, yet they have not fully leveraged insights from biological visual processing systems. Recently, a brain-inspired SSL model named PhiNet was proposed; it is based on a ResNet backbone and operates on static image inputs with strong augmentation. In this paper, we introduce PhiNet v2, a Transformer-based architecture that processes temporal visual input (that is, sequences of images) without relying on strong augmentation to learn robust visual representations, similar to human visual processing. Our learning objective is derived from variational inference. Through extensive experimentation, we demonstrate that PhiNet v2 achieves competitive performance compared to state-of-the-art vision foundation models, including RSP and CropMAE, while maintaining the ability to learn from sequential input without strong data augmentation. This work represents a step toward more biologically plausible computer vision systems that process visual information in a manner more aligned with human cognitive processes.
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 9970