∆ DELTA: Language Diffusion-based EEG-to-Text Architecture

Published: 23 Sept 2025 · Last Modified: 24 Nov 2025 · NeurIPS 2025 Workshop BrainBodyFM · CC BY 4.0
Keywords: EEG-to-Text, Residual Vector Quantization, Language Diffusion Model, Discrete Tokenization, Multimodal Brain-Language Learning
TL;DR: We propose DELTA, an EEG-to-text architecture that converts continuous brain signals into multi-layer discrete tokens and uses a non-sequential diffusion model to reconstruct text, overcoming the cumulative errors of traditional sequential methods.
Abstract: Electroencephalogram (EEG)-to-text generation remains challenging due to high-dimensional noise, subject variability, and error accumulation in autoregressive decoding. We introduce DELTA, which pairs a Residual Vector Quantization (RVQ) EEG tokenizer with a masked language diffusion model (LLaDA). RVQ discretizes continuous EEG into multi-layer tokens to reduce noise and individual differences, while LLaDA reconstructs sentences via non-sequential denoising. On ZuCo, DELTA improves semantic alignment by up to 5.37 points over autoregressive baselines, achieving a BLEU-1 of 21.9 and a ROUGE-1 F score of 17.2 under word-level conditions. These results enable reliable text generation from small EEG-text datasets and point toward scalable multimodal EEG-language models.
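To make the tokenizer concrete, here is a minimal sketch of residual vector quantization, the discretization step the abstract describes: each codebook layer quantizes the residual left by the previous layer, so a signal becomes a short stack of discrete indices. The codebook sizes, depth, and NumPy setup below are illustrative assumptions, not the paper's actual configuration.

```python
# Illustrative RVQ sketch (assumed toy parameters, not DELTA's real tokenizer).
import numpy as np

rng = np.random.default_rng(0)

def rvq_encode(x, codebooks):
    """Quantize x into one token index per codebook layer.

    Each layer quantizes the residual left by the previous layer,
    so deeper layers capture progressively finer detail.
    """
    residual = x.copy()
    tokens = []
    for cb in codebooks:
        dists = np.linalg.norm(cb - residual, axis=1)  # distance to every code
        idx = int(np.argmin(dists))                    # nearest code index
        tokens.append(idx)
        residual = residual - cb[idx]                  # pass the residual down
    return tokens

def rvq_decode(tokens, codebooks):
    """Reconstruct by summing the selected code vectors across layers."""
    return sum(cb[i] for cb, i in zip(codebooks, tokens))

# Toy example: a 3-layer RVQ over an 8-dimensional "EEG feature" vector.
codebooks = [rng.normal(size=(16, 8)) for _ in range(3)]
x = rng.normal(size=8)
tokens = rvq_encode(x, codebooks)      # e.g. three indices in [0, 16)
x_hat = rvq_decode(tokens, codebooks)  # approximate reconstruction of x
# Reconstruction error typically shrinks as residual layers are added.
err = np.linalg.norm(x - x_hat)
```

The multi-layer token stack is what lets a downstream masked diffusion model operate over discrete symbols rather than raw continuous signals.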
Submission Number: 42