Keywords: compositionality, systematicity, task, complexity
TL;DR: Variable "complexity" of inputs + least-effort pressure -> compositionality
Abstract: Compositionality is assumed to be a key property of language, yet it is hard to observe in language emergence simulations. Following De Beule & Bergen (2006), we posit that the meanings of the datapoints that agents discuss must vary in complexity. We extend their work in three directions. First, we argue that this variation in the task is realistic and underlies the emergence of intersective adjectives and argument structure. Second, we show promising results for this hypothesis with attention-based neural networks. Third, we argue that languages learned on tasks where meaning complexity varies are easier to analyse, and we propose an intuitive metric called concatenability to illustrate this claim.
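The abstract does not define concatenability. Purely as an illustration of what a metric with that name might compute, the sketch below (all function and variable names are hypothetical, not from the paper) scores an emergent language by the fraction of composite meanings whose message is the concatenation of the messages used for their atomic parts.

```python
from typing import Dict, Tuple

# Hypothetical sketch only: "concatenability" is not specified in this abstract.
# One naive reading of the name: how often the message for a composite meaning
# is just the concatenation (in either order) of its parts' messages.

def concatenability(
    atomic_messages: Dict[str, str],                  # e.g. {"red": "aa", "box": "xy"}
    composite_messages: Dict[Tuple[str, str], str],   # e.g. {("red", "box"): "aaxy"}
) -> float:
    """Fraction of composite meanings whose message equals the
    concatenation of the messages for their atomic parts."""
    if not composite_messages:
        return 0.0
    hits = 0
    for (part_a, part_b), message in composite_messages.items():
        m_a, m_b = atomic_messages[part_a], atomic_messages[part_b]
        if message in (m_a + m_b, m_b + m_a):
            hits += 1
    return hits / len(composite_messages)


# Toy usage: a perfectly concatenative language scores 1.0.
atoms = {"red": "aa", "blue": "bb", "box": "xy"}
composites = {("red", "box"): "aaxy", ("blue", "box"): "xybb"}
print(concatenability(atoms, composites))  # -> 1.0
```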