Keywords: signaling game, reinforcement learning, attention, information
TL;DR: We show that simple reinforcement learning agents can still learn signaling systems even in domains with multiple channels competing for their attention.
Abstract: Signaling games are useful for understanding how language emerges. In standard models, the learning dynamics in some sense already know which features are the signals, even if those signals do not yet have meaning. In this paper we develop a simple model we call an attention game, in which agents must learn which feature of their environment is the signal. We demonstrate that simple reinforcement learning agents can still learn to coordinate in contexts in which (i) the agents do not already know what the signal is and (ii) the other features in the agents’ environment are uncorrelated with the signal. Furthermore, we show that, in the cases in which other features are correlated with the signal, there is a surprising trade-off between learning to pay attention to the signal and success in action. We show that the mutual information between a signal and a feature plays a key role in governing both the accuracy and the attention of the agent.
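The setup described in the abstract can be illustrated with a minimal sketch. The following is not the paper's implementation, but one plausible reading of an attention game under simple Roth–Erev-style urn reinforcement: a sender maps states to signals, the signal is carried on one channel among several, the remaining channels carry uncorrelated noise, and the receiver must simultaneously learn which channel to attend to and which act each observed symbol calls for. All names and parameter choices (e.g. `n_channels`, the trailing accuracy window) are illustrative assumptions.

```python
import random

def simulate(n_rounds=20000, n_states=2, n_channels=3, window=2000, seed=0):
    """Sketch of an attention game with Roth-Erev (urn) reinforcement.

    Channel 0 carries the sender's signal; the other channels carry
    uncorrelated noise. The receiver keeps an attention urn over channels
    and, for each (channel, symbol) pair, an urn over acts. All urns are
    reinforced by +1 on a successful round. Returns the success rate over
    the final `window` rounds and the final attention weights.
    """
    rng = random.Random(seed)
    # Sender: one urn of signal weights per state.
    sender = [[1.0] * n_states for _ in range(n_states)]
    # Receiver: attention urn over channels, act urns per (channel, symbol).
    attention = [1.0] * n_channels
    acts = [[[1.0] * n_states for _ in range(n_states)]
            for _ in range(n_channels)]

    def draw(weights):
        # Sample an index with probability proportional to its weight.
        r = rng.uniform(0.0, sum(weights))
        for i, w in enumerate(weights):
            r -= w
            if r <= 0:
                return i
        return len(weights) - 1

    window_wins = 0
    for t in range(n_rounds):
        state = rng.randrange(n_states)
        signal = draw(sender[state])
        # Channel 0 carries the signal; the rest are uncorrelated noise.
        obs = [signal] + [rng.randrange(n_states)
                          for _ in range(n_channels - 1)]
        channel = draw(attention)
        act = draw(acts[channel][obs[channel]])
        if act == state:
            # Reinforce every choice that contributed to the success.
            sender[state][signal] += 1.0
            attention[channel] += 1.0
            acts[channel][obs[channel]][act] += 1.0
            if t >= n_rounds - window:
                window_wins += 1
    return window_wins / window, attention

if __name__ == "__main__":
    accuracy, attention = simulate()
    print(f"trailing accuracy: {accuracy:.3f}, attention: {attention}")
```

With `n_channels=1` this reduces to an ordinary two-state Lewis signaling game, which this kind of reinforcement learner reliably solves; with extra noise channels, success additionally requires the attention urn to concentrate on the signal-bearing channel, which is the learning problem the abstract highlights.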