Brain-in-the-Loop Generation: Test-Time Scaling of EEG Signals to Steer Large Language Models

13 Sept 2025 (modified: 25 Sept 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Electroencephalography (EEG); Brain-in-the-Loop Interfaces; Test-Time Scaling; Intent Recognition; Adaptive Text Generation; Large Language Models (LLMs); Human-AI Interaction
Abstract: Large language models (LLMs) are increasingly integrated into interactive systems, yet they remain limited in their ability to capture implicit human feedback and adapt generation strategies in real time. Electroencephalography (EEG) provides a non-invasive window into neural signals that reflect user intent, satisfaction, and attention, making it a promising modality for brain-in-the-loop generation. In this work, we introduce a framework that leverages test-time scaling of EEG signals to steer LLM outputs dynamically. Specifically, we develop an intent recognition pipeline that decodes satisfaction-related neural activity from pre-response EEG segments, and we calibrate its predictions with test-time scaling to mitigate session variability and improve reliability. The resulting confidence scores are then mapped to LLM decoding parameters, such as generation length and temperature, allowing the model to extend, shorten, or adjust responses in real time according to the user's implicit neural state. Experiments on a 64-channel EEG dataset collected from human participants demonstrate that (i) test-time scaling significantly improves cross-session generalization and calibration of EEG-based intent decoding, and (ii) brain-in-the-loop generation produces outputs that are better aligned with user preferences than static baselines. Our findings highlight the feasibility of coupling calibrated neural decoding with adaptive LLM generation, opening new directions for human-AI interaction in which the brain directly shapes the dynamics of generative models.
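The abstract does not include implementation details, so the following is only a minimal sketch of the two mechanisms it describes: test-time calibration of an EEG intent classifier's logits (here realized as temperature scaling, one common test-time calibration method that may differ from the authors' scheme) and a mapping from the calibrated satisfaction confidence to LLM decoding parameters. All function names (`fit_calibration_temperature`, `steer_decoding_params`) and the specific linear mappings are illustrative assumptions, assuming a binary satisfied/unsatisfied classifier.

```python
# Minimal sketch (not the authors' code): temperature-scaling calibration of
# EEG intent logits at test time, then mapping the calibrated confidence to
# LLM decoding parameters. The mappings below are illustrative assumptions.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fit_calibration_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    """Pick the scalar temperature T minimizing NLL on a held-out calibration
    split (e.g., a few labeled trials from the new session)."""
    best_T, best_nll = 1.0, np.inf
    for T in grid:
        p = softmax(logits / T)
        nll = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
        if nll < best_nll:
            best_T, best_nll = T, nll
    return best_T

def steer_decoding_params(trial_logits, T, base_max_tokens=256, base_temp=0.7):
    """Map calibrated P(satisfied) to generation length and sampling
    temperature: lower predicted satisfaction -> shorter, more conservative
    responses. One plausible mapping; the paper's exact scheme is not given."""
    p_satisfied = softmax(trial_logits / T)[..., 1]
    max_tokens = int(base_max_tokens * (0.5 + p_satisfied))  # 0.5x-1.5x of base
    gen_temp = base_temp * (0.6 + 0.8 * p_satisfied)         # 0.6x-1.4x of base
    return {"max_new_tokens": max_tokens, "temperature": float(gen_temp)}

# Usage: calibrate once per session, then steer each response.
# T = fit_calibration_temperature(calib_logits, calib_labels)
# params = steer_decoding_params(trial_logits, T)
```

A scalar temperature is a natural choice here because it rescales confidence without changing the classifier's decisions, which matches the abstract's goal of mitigating cross-session variability in confidence rather than retraining the decoder.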
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: true
Submission Guidelines: true
Anonymous Url: true
No Acknowledgement Section: true
Submission Number: 4732