A hierarchical sparse coding model predicts acoustic feature encoding in both auditory midbrain and cortex

Published: 01 Jan 2019, Last Modified: 17 May 2023 · PLoS Comput. Biol. 2019
Abstract: Author summary: When speech enters the ear, it passes through a series of processing stages before arriving at the auditory cortex. Neurons at different processing stages have different response properties. For example, a neuron in the auditory midbrain may specifically detect the onset of a frequency component in speech, whereas a neuron in the auditory cortex may specifically detect phonetic features. The encoding mechanisms underlying these neuronal functions remain unclear. To address this issue, we designed a hierarchical sparse coding model, inspired by the sparse activity of neurons in the sensory system, to learn features of speech signals. We found that the computing units in different layers exhibited hierarchical extraction of speech sound features, similar to that of neurons in the auditory midbrain and auditory cortex, even though the computational principles in these layers were the same. The results suggest that sparse coding and max pooling represent universal computational principles throughout the auditory pathway.
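To make the described architecture concrete, the sketch below shows one plausible reading of a two-layer sparse coding model with max pooling between layers. The abstract does not specify the paper's learning or inference rules, so everything here is an assumption: sparse codes are inferred with ISTA (iterative soft thresholding), the dictionaries `D1` and `D2` are random placeholders for dictionaries that would be learned from speech, and the toy "spectrogram" stands in for real auditory input.

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=100):
    """Infer a sparse code a minimizing ||x - D a||^2 + lam * ||a||_1
    via iterative soft thresholding (ISTA). Hypothetical inference
    procedure; the paper's actual rules may differ."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the reconstruction error
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

def max_pool(codes, width=4):
    """Max pooling over time: keep each unit's largest activation within
    non-overlapping windows of `width` frames."""
    T = codes.shape[1] // width * width
    return codes[:, :T].reshape(codes.shape[0], -1, width).max(axis=2)

rng = np.random.default_rng(0)

# Toy "spectrogram": 32 frequency channels x 64 time frames of noise.
spec = rng.standard_normal((32, 64))

# Layer 1: sparse coding of each time frame (midbrain-like stage).
D1 = rng.standard_normal((32, 64))
D1 /= np.linalg.norm(D1, axis=0)
codes1 = np.stack([ista(D1, spec[:, t]) for t in range(spec.shape[1])], axis=1)

# Max pooling over time gives local translation invariance.
pooled1 = max_pool(codes1, width=4)

# Layer 2: the same sparse coding principle applied to the pooled
# layer-1 codes (cortex-like stage).
D2 = rng.standard_normal((pooled1.shape[0], 32))
D2 /= np.linalg.norm(D2, axis=0)
codes2 = np.stack([ista(D2, pooled1[:, t]) for t in range(pooled1.shape[1])], axis=1)

print(codes1.shape, pooled1.shape, codes2.shape)  # (64, 64) (64, 16) (32, 16)
```

The point of the sketch is the abstract's central claim: both layers run the identical sparse-coding computation, and hierarchy arises only from stacking that computation with max pooling in between.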