The Effect of Efficient Messaging and Input Variability on Neural-Agent Iterated Language Learning

Published: 01 Jan 2021, Last Modified: 29 Jan 2024. CoRR 2021.
Abstract: Natural languages display a trade-off among different strategies to convey syntactic structure, such as word order or inflection. This trade-off, however, has not appeared in recent simulations of iterated language learning with neural network agents (Chaabouni et al., 2019b). We re-evaluate this result in light of three factors that play an important role in comparable experiments from the Language Evolution field: (i) a speaker bias towards efficient messaging, (ii) non-systematic input languages, and (iii) a learning bottleneck. Our simulations show that neural agents mainly strive to maintain the utterance-type distribution observed during learning, rather than developing a more efficient or systematic language.
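To make the experimental paradigm referenced in the abstract concrete, below is a minimal sketch of iterated learning with a transmission (learning) bottleneck. It is an illustration of the general setup, not the paper's neural-agent implementation: the meaning space, the random holistic initial language, the trivial memorising learner, and the bottleneck size are all hypothetical choices made for this example.

```python
# Minimal sketch of the iterated-learning paradigm with a learning bottleneck.
# Illustrative only; the actual paper trains neural network agents.
import random

MEANINGS = [(obj, attr) for obj in range(5) for attr in range(5)]
BOTTLENECK = 10   # each learner observes only a subset of meaning-utterance pairs
GENERATIONS = 5


def random_utterance():
    """A random, non-compositional form (stands in for a holistic utterance)."""
    return "".join(random.choice("ab") for _ in range(4))


def learn(observed):
    """A trivially simple learner: memorise what was seen, invent the rest."""
    return {m: observed.get(m, random_utterance()) for m in MEANINGS}


# Generation 0: a non-systematic (holistic) input language.
language = {m: random_utterance() for m in MEANINGS}

for gen in range(GENERATIONS):
    # Transmission bottleneck: the next learner sees only a random sample.
    sample = dict(random.sample(list(language.items()), BOTTLENECK))
    language = learn(sample)
    print(f"generation {gen + 1}: {len(set(language.values()))} distinct utterances")
```

In the Language Evolution literature this bottleneck is the pressure under which systematic (compositional) languages tend to emerge; the paper examines whether the same pressures have comparable effects on neural agents.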