Analysis of Predictive Coding Models for Phonemic Representation Learning in Small Datasets

Published: 02 Jul 2020, Last Modified: 05 May 2023, SAS 2020
Abstract: Neural network models based on predictive coding are interesting from the viewpoint of computational modelling of human language acquisition, where the objective is to understand how linguistic units could be learned from speech without any labels. Even though several promising predictive coding-based learning algorithms have been proposed in the literature, it is currently unclear how well they generalise across languages and training dataset sizes. In addition, although such models have been shown to be effective phonemic feature learners, it is unclear whether minimisation of their predictive loss functions also leads to optimal phoneme-like representations. The present study investigates the behaviour of two predictive coding models, Autoregressive Predictive Coding (APC) and Contrastive Predictive Coding (CPC), in a phoneme discrimination (ABX) task for two languages with different dataset sizes. Our experiments show a strong correlation between the autoregressive loss and the phoneme discrimination scores on both datasets. However, to our surprise, the CPC model converges rapidly, already after one pass over the training data, and, on average, its representations outperform those of APC for both languages.
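To make the two training objectives concrete, below is a minimal PyTorch sketch of the frame-level losses the two models minimise: APC regresses acoustic features k steps into the future with an L1 loss, while CPC discriminates the true future frame from negatives with an InfoNCE loss. Tensor shapes, the prediction horizon k, and all function names are illustrative assumptions, not the authors' implementation; the original CPC additionally uses step-specific projections and within-sequence negative sampling, which are omitted here.

```python
# Illustrative sketch (assumed PyTorch); not the authors' code.
import torch
import torch.nn.functional as F

def apc_loss(predictions, features, k=3):
    """APC objective: predict the acoustic features k frames ahead.

    predictions: (batch, time, dim) outputs of an autoregressive encoder.
    features:    (batch, time, dim) target features (e.g. log-Mel frames).
    """
    # Align the prediction at time t with the target at time t + k;
    # APC uses an L1 regression loss between the two.
    return F.l1_loss(predictions[:, :-k, :], features[:, k:, :])

def cpc_loss(context, encoded, k=3):
    """CPC (InfoNCE) objective: pick the true future frame among negatives.

    context: (batch, time, dim) context vectors c_t.
    encoded: (batch, time, dim) frame encodings z_t.
    Simplification: negatives are all other frames in the batch, and the
    step-specific linear transform of the original CPC is omitted.
    """
    c = context[:, :-k, :].reshape(-1, context.size(-1))   # c_t
    z = encoded[:, k:, :].reshape(-1, encoded.size(-1))    # z_{t+k}
    logits = c @ z.t()   # similarity of every (context, candidate) pair
    targets = torch.arange(logits.size(0), device=logits.device)
    # The diagonal pairs (c_t, z_{t+k}) are the positives.
    return F.cross_entropy(logits, targets)
```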
Keywords: Self-supervised Learning, Representation Learning, Predictive Coding, Phonemic Learning, ICML
Double Submission: No
TL;DR: An analysis of the correlation between the validation loss functions of predictive coding models and their performance in phoneme discrimination tasks.
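As a concrete illustration of the evaluation, an ABX trial presents three stimuli: A and X share a phoneme category, B belongs to a different one, and the trial is scored correct when the learned representation of X lies closer to A than to B. The sketch below shows this decision rule for single pooled vectors with cosine distance; this is an assumed simplification, since ABX evaluations on speech typically compare frame sequences with dynamic time warping.

```python
# Illustrative sketch of the ABX decision rule; not the authors' code.
import numpy as np

def cosine_distance(u, v):
    """1 - cosine similarity between two pooled representation vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def abx_trial_correct(a, b, x):
    """Score one ABX trial: A and X share a phoneme category, B does not.

    Correct when X is closer (in representation space) to A than to B;
    a model's ABX score is the fraction of trials scored correct.
    """
    return cosine_distance(a, x) < cosine_distance(b, x)
```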