Abstract: Losing the ability to speak due to brain injury or neurodegenerative diseases such as ALS can be debilitating. Brain-computer interfaces (BCIs) could provide affected individuals with a fast and intuitive way to communicate by decoding speech-related neural activity into a computer-synthesized voice. Current intracortical BCIs for communication, which rely on handwriting or point-and-click typing, are substantially slower than natural speech and do not capture its full expressive range. Recent studies have identified speech features in ECoG and sEEG recordings; however, intelligible speech synthesis from these signals has not yet been demonstrated. Our previous work showed speech-related patterns in intracortical recordings from dorsal (arm/hand) motor cortex that enabled discrete word/phoneme classification, motivating an intracortical approach to continuous voice synthesis. Here, we present a neural decoding framework that directly translates neural activity, recorded from human motor cortex with intracortical multielectrode arrays, into a low-dimensional speech feature space from which voice is synthesized.
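To make the decoding pipeline concrete, the following is a minimal illustrative sketch, not the paper's method: the abstract does not specify the decoder, so a ridge-regression mapping from binned spike counts to a hypothetical low-dimensional speech feature space is assumed here, with synthetic data standing in for real recordings. The electrode count C, feature dimensionality F, and the decoder itself are all assumptions.

    # Illustrative sketch only: the actual decoder architecture and speech
    # feature set are not given in the abstract. Assumed: binned spike
    # counts (T time bins x C electrodes) decoded into F low-dimensional
    # speech features via ridge regression, as a stand-in decoder.
    import numpy as np

    rng = np.random.default_rng(0)
    T, C, F = 5000, 192, 10  # time bins, electrodes, speech features (assumed sizes)

    # Synthetic stand-ins for real recordings and time-aligned speech features.
    spikes = rng.poisson(2.0, size=(T, C)).astype(float)
    true_W = rng.normal(size=(C, F))
    speech_feats = spikes @ true_W + rng.normal(scale=0.5, size=(T, F))

    # Ridge regression: W = (X^T X + lambda I)^-1 X^T Y
    lam = 10.0
    X, Y = spikes, speech_feats
    W = np.linalg.solve(X.T @ X + lam * np.eye(C), X.T @ Y)

    decoded = X @ W  # decoded low-dimensional speech features
    # In a real system, these features would drive a vocoder to synthesize voice.
    print("feature-wise correlation:",
          np.round([np.corrcoef(decoded[:, i], Y[:, i])[0, 1] for i in range(F)], 2))

In practice, a causal, nonlinear decoder (e.g., a recurrent network) operating on short time bins would likely replace the linear map above for real-time synthesis; the linear version is used here only to keep the sketch self-contained.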