Reinforcement Communication Learning in Different Social Network Structures

Published: 17 Jul 2020, Last Modified: 22 Oct 2023, LaReL 2020
Abstract: Social network structure is one of the key determinants of human language evolution. Previous work has shown that the network of social interactions shapes decentralized learning in human groups, leading to the emergence of different kinds of communicative conventions. We examined the effects of social network organization on the properties of communication systems that emerge in decentralized, multi-agent reinforcement learning communities. We found that the global connectivity of a social network drives the convergence of populations on shared and symmetric communication systems, preventing agents from forming many local "dialects". Moreover, an agent's degree is inversely related to the consistency of its use of communicative conventions. These results demonstrate the importance of basic social network properties for reinforcement communication learning and suggest a new interpretation of findings on human convergence on word conventions.
TL;DR: Social network organization affects the properties of communication systems emerging in multi-agent reinforcement learning settings.
Keywords: language evolution, multi-agent reinforcement learning, graphs, social topology, deep reinforcement learning
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2007.09820/code)
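
To make the setup in the abstract concrete, here is a minimal, hypothetical sketch of the kind of experiment it describes: agents placed on the nodes of social graphs with different global connectivity, playing a reinforcement-style naming game along edges, with agreement measured afterwards. This is not the paper's implementation (the paper uses deep multi-agent reinforcement learning); the tabular update rule, the graph choices, and all names (`run`, `ETA`, `N_SYMBOLS`, `N_STEPS`) are illustrative assumptions.

```python
"""Illustrative sketch only, not the authors' code: a tabular,
reinforcement-style naming game over NetworkX graphs, mimicking the
abstract's comparison of sparse vs. densely connected populations."""
import random
import networkx as nx
import numpy as np

N_AGENTS, N_SYMBOLS, N_STEPS = 30, 10, 20000
ETA = 0.1  # hypothetical reinforcement step size


def run(graph, seed=0):
    rng = random.Random(seed)
    # Each agent keeps a positive weight per symbol; its "convention" is the argmax.
    weights = {v: np.ones(N_SYMBOLS) for v in graph.nodes}
    edges = list(graph.edges)
    for _ in range(N_STEPS):
        speaker, listener = rng.choice(edges)
        if rng.random() < 0.5:  # either endpoint may speak
            speaker, listener = listener, speaker
        # Speaker samples a symbol proportionally to its weights.
        p = weights[speaker] / weights[speaker].sum()
        sym = rng.choices(range(N_SYMBOLS), weights=p)[0]
        # Communication "succeeds" if the symbol is the listener's current favorite.
        success = sym == int(np.argmax(weights[listener]))
        for agent in (speaker, listener):
            weights[agent][sym] += ETA if success else -ETA / 2
            weights[agent] = np.clip(weights[agent], 1e-3, None)
    conventions = {v: int(np.argmax(weights[v])) for v in graph.nodes}
    # Global agreement: fraction of edges whose endpoints share a convention.
    return float(np.mean([conventions[u] == conventions[v] for u, v in graph.edges]))


if __name__ == "__main__":
    sparse = nx.watts_strogatz_graph(N_AGENTS, k=4, p=0.05, seed=1)  # low global connectivity
    dense = nx.complete_graph(N_AGENTS)                              # fully connected
    print("sparse graph agreement:", run(sparse))
    print("dense graph agreement:", run(dense))
```

Under these assumptions, the densely connected population would be expected to converge on a single shared symbol set, while the sparse ring-like graph can sustain several local "dialects"; per-agent consistency versus node degree could be measured from the same `weights` table.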