Interpretable agent communication from scratch (with a generic visual processor emerging on the side)

Published: 09 Nov 2021, Last Modified: 14 Jul 2024 · NeurIPS 2021 Poster
Keywords: emergent communication, self-supervised learning, interpretability, multi-agent, deep neural networks
Abstract: As deep networks begin to be deployed as autonomous agents, the issue of how they can communicate with each other becomes important. Here, we train two deep nets from scratch to perform realistic referent identification through unsupervised emergent communication. We show that the largely interpretable emergent protocol allows the nets to successfully communicate even about object types they did not see at training time. The visual representations induced as a by-product of our training regime, moreover, show comparable quality, when re-used as generic visual features, to a recent self-supervised learning model. Our results provide concrete evidence of the viability of (interpretable) emergent deep net communication in a more realistic scenario than previously considered, as well as establishing an intriguing link between this field and self-supervised visual learning.
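To make the setup described in the abstract concrete, here is a minimal, hedged sketch of a sender/receiver referential game of the kind the paper builds on: a sender encodes the target image's features into a discrete message via straight-through Gumbel-Softmax, and a receiver must identify the target among candidate images. This is not the authors' exact architecture (see the EGG code link below for that); the vocabulary size, feature dimensions, single-symbol messages, and module names are illustrative assumptions.

```python
# Hedged sketch of a referential communication game (assumed sizes and modules,
# not the paper's exact setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, FEAT, HID = 32, 512, 128  # assumed vocabulary and feature/hidden sizes


class Sender(nn.Module):
    def __init__(self):
        super().__init__()
        self.to_logits = nn.Sequential(
            nn.Linear(FEAT, HID), nn.ReLU(), nn.Linear(HID, VOCAB)
        )

    def forward(self, target_feats, tau=1.0):
        # One-symbol discrete message, sampled with straight-through Gumbel-Softmax.
        return F.gumbel_softmax(self.to_logits(target_feats), tau=tau, hard=True)


class Receiver(nn.Module):
    def __init__(self):
        super().__init__()
        self.msg_embed = nn.Linear(VOCAB, HID)
        self.img_embed = nn.Linear(FEAT, HID)

    def forward(self, message, candidate_feats):
        # Score each candidate image against the message; the highest score is the guess.
        m = self.msg_embed(message)              # (B, HID)
        c = self.img_embed(candidate_feats)      # (B, N, HID)
        return torch.einsum("bh,bnh->bn", m, c)  # (B, N) logits


sender, receiver = Sender(), Receiver()
opt = torch.optim.Adam(
    list(sender.parameters()) + list(receiver.parameters()), lr=1e-3
)

B, N = 16, 5                            # B games per batch, N candidate images each
feats = torch.randn(B, N, FEAT)         # stand-in for CNN image features
target_idx = torch.randint(0, N, (B,))  # which candidate is the referent

msg = sender(feats[torch.arange(B), target_idx])
logits = receiver(msg, feats)
loss = F.cross_entropy(logits, target_idx)  # receiver must pick out the referent
loss.backward()
opt.step()
```

Training end-to-end on this game is what induces both the communication protocol and, as a by-product, the sender's visual features that the paper evaluates as general-purpose representations.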
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
TL;DR: Deep nets learn to communicate about realistic visual input by inducing an interpretable protocol, and high-quality general-purpose visual features on the side
Supplementary Material: pdf
Code: https://github.com/facebookresearch/EGG/tree/main/egg/zoo/emcom_as_ssl
Community Implementations: 1 code implementation via CatalyzeX: https://www.catalyzex.com/paper/interpretable-agent-communication-from/code