Brain-in-the-Loop Generation: Test-Time Scaling of EEG Signals to Steer Large Language Models
Keywords: Electroencephalography (EEG); Brain-in-the-Loop Interfaces; Test-Time Scaling; Intent Recognition; Adaptive Text Generation; Large Language Models (LLMs); Human-AI Interaction
Abstract: Large language models (LLMs) are increasingly integrated into interactive systems, yet they remain limited in capturing implicit human feedback and adapting their generation strategies in real time. Electroencephalography (EEG) provides a non-invasive window into neural signals that reflect user intent, satisfaction, and attention, making it a promising modality for brain-in-the-loop generation. In this work, we introduce a framework that leverages test-time scaling of EEG signals to steer LLM outputs dynamically. Specifically, we develop an intent recognition pipeline that decodes satisfaction-related neural activity from pre-response EEG segments, and we calibrate its predictions with test-time scaling to mitigate session variability and improve reliability. The resulting confidence scores are then mapped to LLM decoding parameters, such as generation length and temperature, allowing the model to extend, shorten, or adjust its responses in real time according to the user's implicit neural state. Experiments on a 64-channel EEG dataset collected from human participants demonstrate that (i) test-time scaling significantly improves cross-session generalization and calibration of EEG-based intent decoding, and (ii) brain-in-the-loop generation produces outputs that are better aligned with user preferences than those of static baselines. Our findings highlight the feasibility of coupling calibrated neural decoding with adaptive LLM generation, opening new directions for human-AI interaction in which the brain directly shapes the dynamics of generative models.
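The abstract does not specify the exact calibration method or the confidence-to-parameter mapping. A minimal sketch follows, assuming that test-time scaling here means temperature scaling of the intent classifier's logits fitted on a held-out calibration session, and that calibrated P(satisfied) is mapped linearly onto LLM decoding parameters. All function names, parameter ranges, and the linear mapping are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def temperature_scale(logits, T):
    # Test-time (temperature) scaling: divide logits by a scalar T
    # fitted on calibration data, then apply a numerically stable softmax.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    # Grid-search T to minimize negative log-likelihood on held-out
    # calibration trials (logits: [N, C] floats, labels: [N] ints).
    best_T, best_nll = 1.0, np.inf
    for T in grid:
        probs = temperature_scale(logits, T)
        nll = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
        if nll < best_nll:
            best_T, best_nll = T, nll
    return best_T

def confidence_to_decoding_params(p_satisfied,
                                  temp_range=(0.5, 1.1),
                                  len_range=(64, 512)):
    # Hypothetical monotone mapping from calibrated P(satisfied) to LLM
    # decoding parameters: low satisfaction -> longer, more exploratory
    # responses; high satisfaction -> shorter, more conservative ones.
    lo_t, hi_t = temp_range
    lo_l, hi_l = len_range
    temperature = hi_t - p_satisfied * (hi_t - lo_t)
    max_new_tokens = int(hi_l - p_satisfied * (hi_l - lo_l))
    return {"temperature": float(temperature),
            "max_new_tokens": max_new_tokens}
```

Under these assumptions, a calibrated P(satisfied) of 0.2 would yield a higher sampling temperature and a larger token budget, nudging the LLM to elaborate or revise, while 0.9 would yield a short, conservative continuation.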
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2026/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4732