Biologically Plausible Sparse Temporal Word Representations

Published: 01 Jan 2024 · Last Modified: 18 May 2025 · IEEE Trans. Neural Networks Learn. Syst. 2024 · License: CC BY-SA 4.0
Abstract: Word representations, usually derived from a large corpus and endowed with rich semantic information, have been widely applied to natural language tasks. Traditional deep language models, built on dense word representations, require large memory and computing resources. Brain-inspired neuromorphic computing systems, which offer better biological interpretability and lower energy consumption, still have major difficulty representing words in terms of neuronal activities, which has restricted their application to more complex downstream language tasks. To comprehensively explore the diverse neuronal dynamics of both integration and resonance, we investigate three spiking neuron models for post-processing the original dense word embeddings, and test the generated sparse temporal codes on several tasks involving both word-level and sentence-level semantics. The experimental results show that our sparse binary word representations perform on par with or even better than the original word embeddings in capturing semantic information, while requiring less storage. Our methods provide a robust representational foundation for language in terms of neuronal activities, which could potentially be applied to future downstream natural language tasks on neuromorphic computing systems.
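The page carries no code, so as a minimal illustrative sketch only: assuming a standard leaky integrate-and-fire (LIF) neuron per embedding dimension (one instance of the integration-type dynamics the abstract alludes to, not necessarily the authors' exact models), the conversion from a dense vector to a sparse binary temporal code might look like the following. The function name `lif_encode` and all parameter values are hypothetical.

```python
import numpy as np

def lif_encode(embedding, T=50, tau=20.0, v_th=5.0, dt=1.0):
    """Sketch: encode a dense embedding as a sparse binary spike raster.

    Each embedding dimension drives one leaky integrate-and-fire (LIF)
    neuron as a constant input current; the neuron's spike train over
    T timesteps becomes that dimension's temporal code. This is an
    assumed encoding scheme, not the paper's verified method.
    """
    d = embedding.shape[0]
    # Shift/scale so all input currents lie in [0, 1].
    lo, hi = embedding.min(), embedding.max()
    current = (embedding - lo) / (hi - lo + 1e-8)
    v = np.zeros(d)                            # membrane potentials
    spikes = np.zeros((d, T), dtype=np.uint8)  # binary temporal code
    for t in range(T):
        v += dt * (-v / tau + current)         # leaky integration
        fired = v >= v_th                      # threshold crossing
        spikes[fired, t] = 1                   # emit spikes
        v[fired] = 0.0                         # reset after firing
    return spikes

# Toy usage: a random 300-d "word embedding" -> 300 x 50 binary code.
rng = np.random.default_rng(0)
code = lif_encode(rng.normal(size=300))
print(code.shape, code.mean())                 # mean = fraction of ones
```

With the threshold above the weakest steady-state inputs, low-valued dimensions never fire, so the resulting raster is sparse and spike timing carries the magnitude information; the paper's resonance-type neurons would instead encode input via subthreshold oscillations.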