Towards neural foundation models for vision: Aligning EEG, MEG and fMRI representations to perform decoding, encoding and modality conversion

ICLR 2024 Workshop Re-Align Submission 15

Published: 02 Mar 2024, Last Modified: 23 Apr 2024
Venue: ICLR 2024 Workshop Re-Align (Poster)
License: CC BY 4.0
Track: long paper (up to 9 pages)
Keywords: representation alignment, decoding, encoding, modality conversion
TL;DR: We build a framework to align representations of neural data and visual stimuli, enabling brain encoding and decoding across different modalities
Abstract: This paper presents a novel approach toward the creation of a foundation model that aligns representations of neural data and visual stimuli by leveraging contrastive learning. We work with three neuroimaging modalities: EEG, MEG, and fMRI. The capabilities of our framework are showcased through three key experiments: decoding visual information from neural data, encoding images into neural representations, and converting between neural modalities. The results demonstrate the model's ability to accurately capture semantic information across different brain imaging techniques, illustrating its potential for decoding, encoding, and modality conversion tasks.
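The abstract describes aligning neural and visual representations with contrastive learning. As an illustration only (not the authors' actual implementation, whose architecture and hyperparameters are not given here), the core idea can be sketched as a CLIP-style symmetric InfoNCE objective over paired neural/image embeddings; the function name, batch shape, and temperature value below are all assumptions.

```python
import numpy as np

def symmetric_info_nce(neural_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    neural_emb, image_emb: (batch, dim) arrays where row i of each is a
    matched neural-recording / image pair. Embeddings are L2-normalized,
    cosine similarities are scaled by `temperature`, and cross-entropy is
    averaged over both directions (neural->image and image->neural).
    This is an illustrative sketch, not the paper's exact loss.
    """
    # L2-normalize so the dot product is cosine similarity
    n = neural_emb / np.linalg.norm(neural_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = n @ v.T / temperature  # (batch, batch) similarity matrix

    # Numerically stable log-softmax along a given axis
    def log_softmax(x, axis):
        x = x - x.max(axis=axis, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

    idx = np.arange(logits.shape[0])
    # Diagonal entries are the positive (matched) pairs
    loss_n2v = -log_softmax(logits, axis=1)[idx, idx].mean()
    loss_v2n = -log_softmax(logits, axis=0)[idx, idx].mean()
    return (loss_n2v + loss_v2n) / 2
```

With this objective, matched neural/image pairs are pulled together in the shared embedding space while mismatched pairs are pushed apart, which is what makes downstream decoding, encoding, and modality conversion by nearest-neighbor retrieval possible.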
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 15