How Foundation Models are Reshaping Non-Invasive Brain–Computer Interfaces: A Case for Novel Human Expression and Alignment
Track: Artwork
Keywords: Brain–Computer Interfaces, Foundation Models, Large Language Models, Neuroaesthetics, Human–Machine Interaction, AI Alignment
TL;DR: SYNAPTICON is an open, reproducible Neuro-LLM framework that decodes EEG to text and drives live audiovisuals in a closed-loop BCI, orchestrating performance while enabling rigorous tests of fidelity, governance-by-design, and human–AI co-creativity.
Abstract: SYNAPTICON is a research prototype at the intersection of neuro-hacking, non-invasive brain–computer interfaces (BCIs), and foundation models, probing new territories of human expression, neuroaesthetics, and AI alignment. Envisioning a cognitive “Panopticon” where biological and advanced synthetic intelligent systems converge, it implements a pipeline that couples temporal neural dynamics with pre-trained language models and operationalizes them in a closed loop for expression. At its core lies a live “Brain Waves-to-Natural Language-to-Aesthetics” system that translates neural states, measured via electroencephalography (EEG), into decoded speech and then into immersive audiovisual output, shaping altered perceptual experiences and inviting audiences to engage directly with the user’s mind. SYNAPTICON provides a reproducible reference for foundation-model-assisted BCIs, suitable for advanced studies of human–machine interaction (HMI).
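To make the closed loop described in the abstract concrete, the sketch below outlines one possible shape of an EEG-to-text-to-audiovisual pipeline. It is a minimal illustration under stated assumptions, not the SYNAPTICON implementation: the simulated EEG source, the band-power features, the `decode_to_text` stub standing in for a pre-trained language-model decoder, and the toy audiovisual parameter mapping are all hypothetical choices introduced here to show the loop structure.

```python
"""Minimal closed-loop sketch: EEG -> decoded text -> audiovisual parameters.

Illustrative stand-in only; every component below is an assumption,
not the SYNAPTICON system described in the submission.
"""
import numpy as np

FS = 256          # assumed sampling rate (Hz)
WINDOW_S = 2.0    # assumed analysis window (seconds)
N_CHANNELS = 8    # assumed EEG channel count


def acquire_eeg_window(rng: np.random.Generator) -> np.ndarray:
    """Stand-in for a live EEG stream; returns (channels, samples)."""
    return rng.standard_normal((N_CHANNELS, int(FS * WINDOW_S)))


def band_power(window: np.ndarray, lo: float, hi: float) -> float:
    """Average spectral power in [lo, hi) Hz across all channels."""
    freqs = np.fft.rfftfreq(window.shape[1], d=1.0 / FS)
    psd = np.abs(np.fft.rfft(window, axis=1)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return float(psd[:, mask].mean())


def decode_to_text(features: dict) -> str:
    """Placeholder for a foundation-model decoder conditioned on EEG features."""
    dominant = max(features, key=features.get)
    return f"state dominated by {dominant} activity"


def text_to_audiovisual(features: dict) -> dict:
    """Toy mapping from EEG features to audiovisual rendering parameters."""
    total = sum(features.values()) or 1.0
    return {
        "hue": features["alpha"] / total,              # color driven by alpha share
        "tempo": 60 + 120 * features["beta"] / total,  # tempo driven by beta share
    }


def closed_loop(n_iterations: int = 3) -> None:
    """Run a few iterations of the acquire -> decode -> render loop."""
    rng = np.random.default_rng(0)
    for step in range(n_iterations):
        window = acquire_eeg_window(rng)
        features = {
            "theta": band_power(window, 4, 8),
            "alpha": band_power(window, 8, 13),
            "beta": band_power(window, 13, 30),
        }
        text = decode_to_text(features)
        av = text_to_audiovisual(features)
        print(f"[{step}] decoded: {text!r} -> audiovisual params: {av}")


if __name__ == "__main__":
    closed_loop()
```

In the actual installation the decoding stage would be a pre-trained language model and the rendering stage a live audiovisual engine; the stubs above only mark where those components sit in the loop.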
Thumbnail Image For Artwork: png
Video Preview For Artwork: mp4
Submission Number: 89