Keywords: Language Emergence, Neural Language Emergence, Compositionality
Abstract: Humans use natural languages compositionally: complex concepts are expressed through expressions grounded in simpler ones. It has therefore been argued that compositionality improves the ability to generalize. This behavior is acquired during natural language learning: natural languages contain many compositional phrases that serve human learners as examples of how to construct compositional expressions. In language emergence, however, neural agents have no access to such compositional language expressions. This limitation can be circumvented by optimizing a suitably devised compositionality metric, which requires no supervising examples. In this paper, we present a learning environment in which agents are pressured to make their emerging languages compositional by incorporating a metric of topological similarity into the loss function. We observe that when this pressure is carefully adjusted, agents achieve better generalization. The optimal level of this pressure depends strongly on the agent architecture, the input, and the structure of the message space. However, we find no simple correlation between high compositionality and generalization; the advantage offered by compositional pressure is situational. We observe instances where moderately compositional languages generalize as well as some highly compositional ones.
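The topological similarity metric mentioned in the abstract is commonly computed as the correlation between pairwise distances in the input space and in the message space (often called topographic similarity, after Brighton & Kirby). Below is a minimal sketch of this metric, assuming Hamming distance for both inputs and messages; the paper's exact distance functions, and how the metric is folded into the loss, are not specified here.

```python
# Minimal sketch of topographic/topological similarity: the Spearman
# correlation between pairwise input distances and message distances.
# Hamming distance on both sides is an illustrative assumption, not
# necessarily the paper's choice.
from itertools import combinations
from scipy.stats import spearmanr

def hamming(a, b):
    """Fraction of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def topographic_similarity(inputs, messages):
    """Correlation of pairwise distances in input space vs. message space."""
    pairs = list(combinations(range(len(inputs)), 2))
    d_in = [hamming(inputs[i], inputs[j]) for i, j in pairs]
    d_msg = [hamming(messages[i], messages[j]) for i, j in pairs]
    return spearmanr(d_in, d_msg).correlation

# Example: a language whose messages mirror input structure scores 1.0.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
messages = ["aa", "ab", "ba", "bb"]
print(topographic_similarity(inputs, messages))  # -> 1.0
```

A pressure toward compositionality can then be applied by adding a term such as `-lambda * topographic_similarity(...)` to the training loss, where the weight `lambda` (a hypothetical name here) controls the strength of the pressure the paper studies.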
One-sentence Summary: This paper explores the effects of externally introducing a compositionality pressure on neural agents in language emergence, considering factors such as the structure of the message space, the input space, and the agent architecture.