Icy: A benchmark for measuring compositional inductive bias of emergent communication models

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission · Readers: Everyone
Keywords: emergent communication, compositionality, metrics, language model
Abstract: We present \textsc{Icy}, a benchmark for measuring the compositional inductive bias of models in the context of emergent communication. We devise corrupted compositional grammars that probe for limitations in the compositional inductive bias of frequently used models, and use these grammars to compare and contrast a wide range of models. We propose a hierarchical model, HU-RNN, which might show an inductive bias towards relocatable atomic groups of tokens, thus potentially encouraging the emergence of words. We experiment with probing for the compositional inductive bias of sender networks in isolation, and also end-to-end with a receiver, as an auto-encoder. We propose a metric of compositionality, Compositional Entropy, that is fast to calculate and broadly applicable.
One-sentence Summary: \textsc{Icy} is a benchmark for measuring the compositional inductive bias of models in the context of emergent communication.
Supplementary Material: zip
