Towards Brain-to-Text Generation: Neural Decoding with Pre-trained Encoder-Decoder Models

Published: 22 Oct 2021, Last Modified: 05 May 2023 · NeurIPS-AI4Science Poster
Keywords: Neural decoding, Text generation, Pre-trained model
Abstract: Decoding language from non-invasive brain signals is crucial for building widely applicable brain-computer interfaces (BCIs). However, most existing studies focus on discriminating which of two candidate stimuli corresponds to a given brain image, which is far from directly generating text from neural activity. To move toward this goal, we first propose two neural decoding tasks of increasing difficulty. The first, simpler task is to predict a word given a brain image and a context, a first step toward text generation. The second, more difficult task is to directly generate text given a brain image and a prefix. To address both tasks, we propose a general approach that leverages a powerful pre-trained encoder-decoder model to predict a word or generate a text fragment. Our model achieves 18.20% and 7.95% top-1 accuracy on the two tasks, respectively, over a vocabulary of more than 2,000 words, averaged across all participants, significantly outperforming strong baselines. These results demonstrate the feasibility of directly generating text from neural activity in a non-invasive way. We hope our work brings practical non-invasive neural language decoders a step closer.
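The abstract does not specify how the brain image is fused with the pre-trained encoder-decoder, so the following is only a minimal sketch of one plausible reading of the word-prediction task: a trainable linear layer projects an fMRI voxel vector into the embedding space of a pre-trained seq2seq model, the resulting "brain token" is prepended to the embedded context, and the decoder scores the next word. BART, the voxel dimensionality, and the `brain_proj` layer are all illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
from transformers import BartTokenizer, BartForConditionalGeneration

VOXEL_DIM = 65730  # hypothetical number of fMRI voxels per brain image

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
hidden = model.config.d_model

# Trainable projection from voxel space into the model's embedding space.
brain_proj = nn.Linear(VOXEL_DIM, hidden)

def predict_next_word(voxels: torch.Tensor, context: str) -> str:
    """Predict the next word given a brain image and a sentence context."""
    # Embed the context tokens with the pre-trained input embeddings.
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    ctx_emb = model.get_input_embeddings()(ctx_ids)        # (1, T, hidden)

    # Project the voxel vector to one pseudo-token and prepend it.
    brain_tok = brain_proj(voxels).view(1, 1, hidden)      # (1, 1, hidden)
    enc_inputs = torch.cat([brain_tok, ctx_emb], dim=1)

    # Ask the decoder for its distribution over the first generated token.
    dec_start = torch.tensor([[model.config.decoder_start_token_id]])
    out = model(inputs_embeds=enc_inputs, decoder_input_ids=dec_start)
    next_id = out.logits[0, -1].argmax()
    return tokenizer.decode(next_id)

# Example call with random features standing in for a real fMRI scan:
print(predict_next_word(torch.randn(VOXEL_DIM), "The cat sat on the"))
```

Under the same assumptions, the harder text-generation task would replace the single decoding step with autoregressive decoding conditioned on the given prefix, e.g. by passing the tokenized prefix as `decoder_input_ids` and continuing generation from there.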
Track: Original Research Track