Abstract: Decoding inner speech from EEG signals presents a critical challenge for brain-computer interfaces (BCIs), particularly for patients with communication disorders. While motor imagery paradigms have shown success, inner speech decoding remains difficult due to its distributed neural patterns and lack of clear biomarkers. This study evaluates the EEG Conformer model’s adaptation to classify inner speech using the high-density “Inner Speech” dataset (128 channels, multiple subjects). Despite the model’s proven effectiveness for motor imagery, results revealed limitations (27.99% accuracy) stemming from inner speech’s complex spatiotemporal dynamics and data scarcity. These findings emphasize the need for both specialized architectures and larger, standardized datasets to advance clinically viable BCIs. The study underscores how dataset characteristics fundamentally constrain decoding performance, guiding future research toward data-centric solutions.
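To make the evaluated approach concrete, the following is a minimal, untrained sketch of an EEG Conformer-style forward pass: a temporal convolution, a spatial convolution across electrodes, pooling into a token sequence, single-head self-attention, and a linear classification head. All weights are random and the hyperparameters (kernel length, filter count, pooling width, four output classes) are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class EEGConformerSketch:
    """Illustrative forward pass mirroring the EEG Conformer's stages:
    temporal conv -> spatial conv -> self-attention -> linear head.
    Weights are random; hyperparameters are assumptions for the sketch."""

    def __init__(self, n_channels=128, n_classes=4, k_temporal=25,
                 n_filters=40, pool=15, seed=0):
        rng = np.random.default_rng(seed)
        self.wt = rng.standard_normal((n_filters, k_temporal)) * 0.1  # temporal kernels
        self.ws = rng.standard_normal((n_filters, n_channels)) * 0.1  # spatial mixing
        self.pool = pool
        d = n_filters
        self.wq = rng.standard_normal((d, d)) * 0.1  # attention projections
        self.wk = rng.standard_normal((d, d)) * 0.1
        self.wv = rng.standard_normal((d, d)) * 0.1
        self.head = rng.standard_normal((d, n_classes)) * 0.1  # classifier

    def forward(self, x):
        # x: (n_channels, n_samples) single-trial EEG
        C, T = x.shape
        F, K = self.wt.shape
        # temporal convolution per channel (cross-correlation, 'valid' mode)
        conv = np.stack([
            np.array([np.convolve(x[c], self.wt[f][::-1], mode="valid")
                      for c in range(C)])          # (C, T-K+1)
            for f in range(F)
        ])                                          # (F, C, T-K+1)
        # spatial projection collapses the electrode dimension
        feat = np.einsum("fc,fct->ft", self.ws, conv)      # (F, T-K+1)
        # average pooling forms a token sequence of shape (n_tokens, F)
        n_tok = feat.shape[1] // self.pool
        tokens = (feat[:, : n_tok * self.pool]
                  .reshape(F, n_tok, self.pool).mean(-1).T)
        # single-head scaled dot-product self-attention over tokens
        q, k, v = tokens @ self.wq, tokens @ self.wk, tokens @ self.wv
        att = softmax(q @ k.T / np.sqrt(F)) @ v            # (n_tok, F)
        # mean-pool tokens and apply the linear head
        return att.mean(0) @ self.head                      # (n_classes,)

# Usage: one synthetic 128-channel trial of 1000 samples
model = EEGConformerSketch()
trial = np.random.default_rng(1).standard_normal((128, 1000))
logits = model.forward(trial)  # shape (4,), one logit per class
```

The sketch only reproduces the data flow; the published model additionally uses batch normalization, multi-head attention with residual connections, and end-to-end training, which are omitted here for brevity.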
External IDs: doi:10.1007/978-3-032-08894-9_19