Decoding Predictive Inference in Visual Language Processing via Spatiotemporal Neural Coherence

Published: 23 Sept 2025, Last Modified: 25 Oct 2025 · NeurIPS 2025 Workshop BrainBodyFM · CC BY 4.0
Keywords: EEG, sign language, predictive coding, optical flow, neural coherence, visual language processing, entropy-based feature selection, deaf signers, age effects, machine learning, spatiotemporal analysis, hierarchical inference, multimodal fusion
TL;DR: We decode EEG responses to sign language using optical flow coherence and machine learning, showing that perceptual delays for unpredictable visual input increase with age, while the prediction window for linguistic input lengthens as well.
Abstract: Human language processing relies on the brain's capacity for predictive inference. We present a machine learning framework for decoding neural (EEG) responses to dynamic visual language stimuli in Deaf signers. Using coherence between neural signals and optical flow-derived motion features, we construct spatiotemporal representations of predictive neural dynamics. Through entropy-based feature selection, we identify frequency-specific neural signatures that differentiate interpretable linguistic input from linguistically disrupted (time-reversed) stimuli. Our results reveal distributed left-hemispheric and frontal low-frequency coherence as key features in language comprehension, with experience-dependent neural signatures correlating with age. This work demonstrates a novel multimodal approach for probing experience-driven generative models of perception in the brain.
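The core measure in the abstract is coherence between EEG signals and optical-flow-derived motion features. The snippet below is a minimal sketch of that idea using synthetic data: a hypothetical motion-energy time series (e.g., mean optical-flow magnitude per video frame, resampled to the EEG rate) and a simulated EEG channel that partially tracks it. All signal parameters (sampling rate, band limits, coupling strength) are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 256                      # assumed EEG sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)  # 60 s of data

# Hypothetical motion-energy signal: a 2 Hz oscillation plus noise,
# standing in for frame-wise optical-flow magnitude of the stimulus.
motion = np.sin(2 * np.pi * 2.0 * t) + 0.5 * rng.standard_normal(t.size)

# Simulated EEG channel that partially follows the motion signal.
eeg = 0.6 * motion + rng.standard_normal(t.size)

# Magnitude-squared coherence; 2 s Welch segments give 0.5 Hz resolution.
f, cxy = coherence(eeg, motion, fs=fs, nperseg=2 * fs)

# Inspect the low-frequency (delta/theta) band highlighted in the abstract.
low = (f >= 1.0) & (f <= 8.0)
peak_coherence = float(cxy[low].max())
```

In the actual framework, `cxy` values per channel and frequency band would form the spatiotemporal feature map fed to entropy-based feature selection and the downstream classifier.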
Submission Number: 6