ReAlnet: Achieving More Human Brain-Like Vision via Human Neural Representational Alignment

ICLR 2024 Workshop Re-Align Submission 29

Published: 02 Mar 2024 · Last Modified: 03 May 2024 · ICLR 2024 Workshop Re-Align Poster · CC BY 4.0
Track: long paper (up to 9 pages)
Keywords: Object Recognition, Human Brain-Like Model, Neural Alignment
TL;DR: We introduce "Re(presentational)Al(ignment)net", a pioneering vision model aligned with human neural activity, enhancing similarity to human brain representations and advancing the development of more brain-like AI systems.
Abstract: Despite the remarkable strides made in artificial intelligence, current object recognition models still lag behind in emulating the way visual information is processed in the human brain. Recent studies have highlighted the potential of using neural data to mimic brain processing; however, these approaches often rely on invasive neural recordings from non-human subjects, leaving a critical gap in our understanding of human visual perception and in the development of more human brain-like vision models. Addressing this gap, we present, for the first time, 'Re(presentational)Al(ignment)net', a vision model aligned with human brain activity based on non-invasive EEG recordings, demonstrating significantly higher similarity to human brain representations. Our image-to-brain multi-layer encoding alignment framework not only optimizes multiple layers of the model, marking a substantial leap in neural alignment, but also enables the model to efficiently learn and mimic the human brain's visual representational patterns across object categories and across neural data modalities. Furthermore, we find that alignment with human brain representations improves the model's adversarial robustness. Our findings suggest that ReAlnet sets a new precedent in the field, bridging the gap between artificial and human vision and paving the way for more brain-like artificial intelligence systems.
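For readers unfamiliar with the idea, the sketch below illustrates one plausible reading of "image-to-brain multi-layer encoding alignment": per-layer encoders predict a subject's EEG features from intermediate activations of a vision backbone, and the summed prediction error is minimized. This is an illustration only, not the authors' implementation; the backbone interface, the linear encoders, the MSE objective, and all names (`MultiLayerEncodingAlignment`, `layer_dims`, `eeg_dim`) are assumptions made for exposition.

```python
import torch.nn as nn
import torch.nn.functional as F

class MultiLayerEncodingAlignment(nn.Module):
    """Hypothetical sketch: map several layers of a vision model to a
    subject's EEG feature vector via per-layer linear encoders."""

    def __init__(self, backbone, layer_dims, eeg_dim):
        super().__init__()
        # `backbone` is assumed to return a list of pooled activations,
        # one [batch, layer_dims[i]] tensor per aligned layer.
        self.backbone = backbone
        self.encoders = nn.ModuleList(nn.Linear(d, eeg_dim) for d in layer_dims)

    def forward(self, images):
        feats = self.backbone(images)
        # One predicted EEG feature vector per aligned layer.
        return [enc(f) for enc, f in zip(self.encoders, feats)]

def alignment_loss(predictions, eeg_target):
    # Sum of per-layer errors between predicted and recorded EEG
    # features; the paper's actual objective may differ.
    return sum(F.mse_loss(p, eeg_target) for p in predictions)
```

In practice such an alignment term would presumably be combined with the model's original task loss (e.g., object classification), so that neural alignment regularizes rather than replaces recognition training; this weighting, too, is an assumption rather than a detail stated in the abstract.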
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 29