Enhancing Language Emergence through Empathy

25 Sept 2019 (modified: 05 May 2023) ICLR 2020 Conference Blind Submission
Keywords: multi-agent deep reinforcement learning, emergent communication, auxiliary tasks
TL;DR: An auxiliary prediction task can speed up learning in language emergence setups.
Abstract: The emergence of language in multi-agent settings is a promising research direction for grounding natural language in simulated agents. If an AI could learn the meaning of language by using it, it could also transfer language flexibly to new situations, which is seen as an important step towards general AI. The scope of emergent communication, however, is still limited. To increase the complexity that can emerge, the learning of skills associated with communication needs to be enhanced. We take inspiration from human language acquisition and the importance of the empathic connection in that process, and propose an approach that introduces a notion of empathy to multi-agent deep reinforcement learning. We extend existing referential-game setups with an auxiliary task in which the speaker predicts the listener's mind change, which shortens learning time. Our experiments show the high potential of this architectural element, doubling the learning speed of the test setup.
Code: https://github.com/AnonymRobotika/ICLR2020
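The linked repository presumably contains the authors' implementation; the sketch below is only an illustration of the auxiliary prediction idea described in the abstract. It assumes a PyTorch referential-game speaker, approximates the "listener's mind change" by predicting the listener's observed choice among candidates, and uses hypothetical names (Speaker, speaker_loss, aux_weight) that are not taken from the paper or repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Speaker(nn.Module):
    """Referential-game speaker with an auxiliary head that predicts
    the listener's choice (illustrative sketch, not the authors' code)."""

    def __init__(self, obj_dim, hidden_dim, vocab_size, num_candidates):
        super().__init__()
        self.encoder = nn.Linear(obj_dim, hidden_dim)
        self.message_head = nn.Linear(hidden_dim, vocab_size)       # emits a one-symbol message
        self.listener_head = nn.Linear(hidden_dim, num_candidates)  # auxiliary: predicted listener choice

    def forward(self, target_obj):
        h = torch.relu(self.encoder(target_obj))
        return self.message_head(h), self.listener_head(h)


def speaker_loss(message_logits, message, reward,
                 predicted_choice_logits, listener_choice, aux_weight=0.1):
    """REINFORCE loss on the emitted message plus a supervised auxiliary
    loss on predicting the listener's actual choice."""
    log_prob = F.log_softmax(message_logits, dim=-1).gather(
        -1, message.unsqueeze(-1)).squeeze(-1)
    policy_loss = -(reward * log_prob).mean()
    aux_loss = F.cross_entropy(predicted_choice_logits, listener_choice)
    return policy_loss + aux_weight * aux_loss
```

The design intuition, under these assumptions, is that the auxiliary head receives a dense supervised signal every round (the listener's observed choice), whereas the game reward alone is sparse; sharing the encoder between the message head and the prediction head is one plausible way such a term could speed up learning.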