Manipulating Multi-agent Navigation Task via Emergent Communications


22 Sept 2022, 12:42 (modified: 26 Oct 2022, 14:21) · ICLR 2023 Conference Blind Submission · Readers: Everyone
Abstract: Multi-agent collaboration struggles to efficiently sustain grounded communication toward a specific task goal. Existing approaches are limited to simple task settings and single-turn communication. This work describes a multi-agent communication scenario via emergent language in a navigation task. The task involves two agents with unequal abilities: the tourist (agent A), who can only observe its immediate surroundings, and the guide (agent B), who has a holistic view but does not know the tourist's initial position. They communicate in an emergent language grounded in the environment and a common task goal: helping the tourist find the target place. We release a new dataset of 3,000 scenarios involving multi-agent visual and language navigation. We also address multi-agent emergent communication by proposing a collaborative learning framework that enables the agents to generate and understand emergent language and solve the task. The framework is trained end-to-end with reinforcement learning by maximizing the task success rate. Results show that the proposed framework achieves competitive performance in both the accuracy of language understanding and the task success rate. We also discuss interpretations of the emerged language.
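The abstract's training setup (two agents, discrete emergent messages, reinforcement learning that maximizes task success) can be illustrated with a toy Lewis-style signaling game. This is a minimal sketch, not the paper's actual framework: the guide observes a target position and emits a discrete symbol, the tourist decodes it into a guess, and both tabular policies are updated with REINFORCE on the shared success reward. All sizes (`N_POS`, `VOCAB`), learning rate, and architecture here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_POS, VOCAB = 4, 4  # hypothetical toy sizes: target positions and message vocabulary

# Tabular softmax policies: guide maps target -> message, tourist maps message -> guess.
guide_logits = np.zeros((N_POS, VOCAB))
tourist_logits = np.zeros((VOCAB, N_POS))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

lr = 0.5
for _ in range(5000):
    target = rng.integers(N_POS)

    # Guide samples a discrete message for the target it observes.
    p_msg = softmax(guide_logits[target])
    msg = rng.choice(VOCAB, p=p_msg)

    # Tourist samples a guess from the received message.
    p_guess = softmax(tourist_logits[msg])
    guess = rng.choice(N_POS, p=p_guess)

    # Shared reward: 1 on task success, 0 otherwise (baseline = chance level).
    reward = 1.0 if guess == target else 0.0
    adv = reward - 1.0 / N_POS

    # REINFORCE: logits += lr * adv * grad(log pi) for the sampled action.
    g = -p_msg
    g[msg] += 1.0
    guide_logits[target] += lr * adv * g

    t = -p_guess
    t[guess] += 1.0
    tourist_logits[msg] += lr * adv * t

# Evaluate the greedy protocol: does decode(encode(target)) recover the target?
acc = np.mean([
    softmax(tourist_logits[np.argmax(guide_logits[t])]).argmax() == t
    for t in range(N_POS)
])
print(f"greedy protocol accuracy: {acc:.2f}")
```

In this sketch the emergent "language" is just the learned target-to-symbol mapping; the paper's setting additionally grounds messages in visual observations and multi-turn exchanges, which this toy deliberately omits.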
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning