Reviewer: ~Jingya_Huang2
Presenter: ~Jingya_Huang2
TL;DR: Oracle experiments in Brain-to-Text BCIs show that integrated systems benefit when neural decoders convey predictions along with calibrated uncertainty and informative alternatives.
Abstract: Co-control between brain–computer interface (BCI) users and intelligent systems requires effective fusion across specialized modules. In Brain-to-Text BCIs, neural decoders (NDs) map neural activity to text token sequences, while language models (LMs) provide compensatory linguistic constraints when ND predictions are uncertain. Integration is typically achieved through probabilistic fusion, yet current systems are poorly calibrated: they encode some notion of confidence in the output distribution but fail to discriminate reliably between correct and incorrect predictions. Through oracle manipulations of the predicted probability distribution that preserve the same MLE solution, spanning over-confident, uncertainty-aware, and alternative-rich regimes, we demonstrate that a better-calibrated system can substantially improve performance. These results highlight the need for neural decoders to communicate both uncertainty and informative alternatives in order to enable robust multi-module co-control.
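The abstract's core manipulation, reshaping a decoder's output distribution while keeping the MLE (argmax) solution fixed, can be sketched with simple temperature scaling. This is an illustrative assumption, not the paper's actual oracle procedure: the function name and the four-token example distribution are hypothetical.

```python
import math

def reshape_distribution(probs, temperature):
    """Re-scale a predicted token distribution with a softmax temperature.

    temperature < 1 sharpens the distribution (over-confident regime);
    temperature > 1 flattens it (uncertainty-aware regime).
    In both cases the argmax token -- the MLE solution -- is unchanged.
    """
    logits = [math.log(p) / temperature for p in probs]
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical decoder output over four candidate tokens.
p = [0.5, 0.3, 0.15, 0.05]
sharp = reshape_distribution(p, 0.5)    # over-confident regime
flat = reshape_distribution(p, 2.0)     # uncertainty-aware regime

same_argmax = (max(range(4), key=sharp.__getitem__)
               == max(range(4), key=flat.__getitem__)
               == max(range(4), key=p.__getitem__))
```

Because the monotone rescaling preserves the ranking of tokens, only the probability mass assigned to alternatives changes, which is exactly the degree of freedom the oracle regimes exercise.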
Length: short paper (up to 4 pages)
Domain: methods
Format Check: Yes, the presenting author will definitely attend in person because they are attending NeurIPS for other, complementary reasons.
Author List Check: The author list is correctly ordered and I understand that additions and removals will not be allowed after the abstract submission deadline.
Anonymization Check: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and URLs that point to identifying information.
Submission Number: 56