Learning Representations as Resistance States: Structure-Aware Neuromorphic Sequence Modeling

15 Sept 2025 (modified: 15 Oct 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Neuromorphic computing; Memristor crossbar; Resistance-state representation; Token-free sequence modeling; Tibetan NLP; Low-resource languages
TL;DR: This paper presents NeuSpell, a neuromorphic framework that learns token-free resistance-state representations in memristor crossbars, achieving state-of-the-art performance on Tibetan spelling correction with new structure-aware datasets.
Abstract: We propose NeuSpell, a neuromorphic framework that learns representations as resistance states in a memristor crossbar array, offering a non-von Neumann alternative to token-based sequence modeling. Instead of digital tokenization and GPU-accelerated pipelines, structural components of input sequences are directly encoded as conductance dynamics, enabling massively parallel, in-memory inference with ultra-low latency (0.07 ms) and near-zero energy cost. Applied to Tibetan syllable recognition, a task where conventional models struggle due to morphological complexity and data scarcity, NeuSpell achieves 98.2% F1 on the SSC-TiM corpus and TUSA benchmark, outperforming state-of-the-art neural and large language models. Beyond this application, our results suggest that resistance-state dynamics can serve as a new foundation for structure-aware, token-free representation learning, opening a path toward efficient neuromorphic architectures that move beyond von Neumann computation.
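To make the crossbar primitive concrete, here is a minimal Python sketch of the analog in-memory matrix-vector multiplication that a memristor crossbar performs in a single read step, the operation the abstract's resistance-state representations build on. The array shapes, conductance range, and read voltages below are illustrative assumptions, not NeuSpell's actual encoding scheme.

```python
# Minimal sketch of in-memory matrix-vector multiplication on a memristor
# crossbar. All names and values are illustrative assumptions, not taken
# from the NeuSpell paper.
import numpy as np

rng = np.random.default_rng(0)

# Learned "representation": a grid of conductance states (siemens).
# Rows = input word lines, columns = output bit lines.
g_min, g_max = 1e-6, 1e-4          # assumed device conductance range
G = rng.uniform(g_min, g_max, size=(8, 4))

# Input sequence features encoded as read voltages on the word lines.
v = rng.uniform(0.0, 0.2, size=8)  # volts; kept small to avoid disturbing states

# Ohm's law per cell plus Kirchhoff's current law per column yield the
# matrix-vector product in one analog read: i_j = sum_k v_k * G[k, j].
i_out = v @ G                       # column output currents (amperes)

print(i_out)
```

Because every cell contributes its current simultaneously, the multiply-accumulate happens in the memory itself rather than being shuttled through a separate processor, which is the source of the latency and energy advantages the abstract claims.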
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 6218