Keywords: Brain-Inspired LLM, Non-Transformer Architecture
Abstract: We introduce BriLLM, the first brain-inspired large language model, which establishes a genuinely biology- and neuroscience-grounded machine learning paradigm. Unlike previous approaches that primarily mimic local neural features, BriLLM implements Signal Fully-connected flowing (SiFu) learning, the first framework to faithfully replicate the brain's macroscopic information-processing principles at scale. Our design is grounded in two core neurocognitive facts: (1) _static semantic mapping_, in which concepts map to dedicated cortical regions, and (2) _dynamic signal propagation_ through electrophysiological activity. This foundation enables distinctive capabilities: inherent multi-modal compatibility, full node-level interpretability, context-length-independent scaling, and global-scale simulation of brain-like language processing. Our 1–2B-parameter models demonstrate stable learning dynamics while matching GPT-1-level generative performance, and a scalability analysis indicates that 100–200B-parameter variants are feasible. BriLLM represents a paradigm shift from representation learning toward biologically validated AGI foundations, offering a principled response to current AI's fundamental limitations.
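The abstract does not spell out SiFu's mechanics, so the toy below is only a minimal sketch of one plausible reading: each vocabulary token is a node in a fully connected graph, a signal vector flows along learnable edges as the context is consumed, and the next token is the node that receives the strongest signal. Every name and design choice here (`SiFuToy`, `signal0`, the per-node edge parameters, the norm-based "energy" score) is a hypothetical illustration, not the authors' implementation.

```python
import torch

class SiFuToy(torch.nn.Module):
    """Toy signal-flow model (assumption-laden sketch, not BriLLM itself):
    one node per vocabulary token; a signal vector propagates through the
    visited nodes, and the candidate node receiving the strongest signal
    is predicted as the next token."""

    def __init__(self, vocab_size: int, dim: int = 32):
        super().__init__()
        # A full (source, target) edge-matrix table would cost O(V^2 * d^2)
        # parameters; we keep one outgoing and one incoming matrix per node
        # purely so the toy stays small.
        self.src = torch.nn.Parameter(torch.randn(vocab_size, dim, dim) * 0.02)
        self.tgt = torch.nn.Parameter(torch.randn(vocab_size, dim, dim) * 0.02)
        self.signal0 = torch.nn.Parameter(torch.ones(dim))  # initial signal

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Flow the signal through the context nodes in sequence order.
        signal = self.signal0
        for t in token_ids:
            signal = torch.tanh(self.src[t] @ signal)
        # Score every candidate node by the "energy" of the signal it receives;
        # a higher norm means a stronger signal, i.e. a likelier next node.
        received = torch.tanh(torch.einsum('vij,j->vi', self.tgt, signal))
        return received.norm(dim=-1)

model = SiFuToy(vocab_size=100)
scores = model(torch.tensor([3, 14, 15]))   # a short token context
next_token = scores.argmax().item()          # node with the strongest signal
```

Under this reading, prediction is node selection rather than a softmax over a learned representation, which is one way to make sense of the abstract's claims of node-level interpretability and context-length-independent scaling.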
Primary Area: foundation or frontier models, including LLMs
Submission Number: 15330