Multi-Agent Cooperation and the Emergence of (Natural) Language

Angeliki Lazaridou, Alexander Peysakhovich, Marco Baroni

Nov 04, 2016 (modified: Apr 05, 2017) ICLR 2017 conference submission
  • Abstract: The current mainstream approach to train natural language systems is to expose them to large amounts of text. This passive learning is problematic if we are interested in developing interactive machines, such as conversational agents. We propose a framework for language learning that relies on multi-agent communication. We study this learning in the context of referential games. In these games, a sender and a receiver see a pair of images. The sender is told one of them is the target and is allowed to send a message to the receiver, while the receiver must rely on it to identify the target. Thus, the agents develop their own language interactively out of the need to communicate. We show that two networks with simple configurations are able to learn to coordinate in the referential game. We further explore whether the “word meanings” induced in the game reflect intuitive semantic properties of the objects depicted in the image, and we present a simple strategy for grounding the agents’ code into natural language, a necessary step in developing machines that should eventually be able to communicate with humans.
  • Conflicts: unitn.it, fb.com
  • Keywords: Natural language processing, Reinforcement Learning, Games
  • Authorids: angeliki.lazaridou@unitn.it, alexpeys@fb.com, marco.baroni@unitn.it
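The referential-game protocol from the abstract can be sketched as a toy simulation. This is a hypothetical illustration of the game's structure only, not the authors' implementation: the paper's agents are neural networks that learn their symbol policies via reinforcement learning, whereas here images are stood in for by feature labels and the sender's symbol assignment is a simple lookup table shared with the receiver for demonstration purposes.

```python
import random

# Toy sketch of a referential game (illustrative; not the paper's model).
# "Images" are reduced to feature labels; the emergent "language" is a
# fixed vocabulary of arbitrary symbols.
VOCAB = ["s0", "s1", "s2"]


def sender(target_features, policy):
    """Assign the target's features a symbol; reuse it on later rounds."""
    if target_features not in policy:
        # Deterministic assignment of the next unused symbol (a stand-in
        # for a learned sender policy).
        policy[target_features] = VOCAB[len(policy) % len(VOCAB)]
    return policy[target_features]


def receiver(message, candidates, policy):
    """Guess which candidate image the message refers to."""
    for i, feats in enumerate(candidates):
        if policy.get(feats) == message:
            return i
    # No association yet: guess at random.
    return random.randrange(len(candidates))


def play_round(images, policy):
    """One round: pick a target, send a message, check the receiver's guess."""
    target = random.randrange(len(images))
    message = sender(images[target], policy)
    guess = receiver(message, images, policy)
    return guess == target


if __name__ == "__main__":
    images = ["cat", "dog"]  # hypothetical pair of image feature labels
    policy = {}
    wins = sum(play_round(images, policy) for _ in range(100))
    print(f"success rate: {wins}/100")
```

Because the lookup policy is shared between the two agents here, coordination is immediate; in the paper the sender and receiver are separate networks and must converge on a shared code purely from the reward signal of successful reference.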