The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning

May 21, 2021 (edited Oct 07, 2021) · NeurIPS 2021 Spotlight
  • Keywords: self-organization, attention, reinforcement learning, evolution strategies, zero-shot generalization, meta-learning, permutation invariance
  • TL;DR: We investigate permutation-invariant RL agents built from modular processing of localized sensory information and attention-based communication, and demonstrate several useful properties of these agents through a series of experimental studies.
  • Abstract: In complex systems, we often observe complex global behavior emerge from a collection of agents interacting with each other in their environment, with each individual agent acting only on locally available information, without knowing the full picture. Such systems have inspired the development of artificial intelligence algorithms in areas such as swarm optimization and cellular automata. Motivated by the emergence of collective behavior from complex cellular systems, we build systems that feed each sensory input from the environment into distinct but identical neural networks, each with no fixed relationship to one another. We show that these sensory networks can be trained to integrate information received locally, and through communication via an attention mechanism, can collectively produce a globally coherent policy. Moreover, the system can still perform its task even if the ordering of its inputs is randomly permuted several times during an episode. These permutation-invariant systems also display useful robustness and generalization properties that are broadly applicable. Interactive demo and videos of our results: https://attentionneuron.github.io
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: https://attentionneuron.github.io
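The mechanism described in the abstract — an identical network applied to every sensory input, with the results pooled through attention — can be sketched minimally. The names, dimensions, and weight shapes below are illustrative assumptions, not the paper's actual architecture; the key idea shown is that with a fixed set of learned queries, attention over per-sensor embeddings sums out the input ordering, so the output is unchanged when the sensors are permuted.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper):
# N sensory inputs of dim d_in, M fixed queries, attention dim d_k.
N, d_in, d_k, M = 8, 3, 16, 4

# Shared ("identical") sensory network: the SAME weights embed every input.
W_k = rng.normal(size=(d_in, d_k))   # key projection
W_v = rng.normal(size=(d_in, d_k))   # value projection
Q = rng.normal(size=(M, d_k))        # learned queries, independent of input order

def attention_pool(obs):
    """obs: (N, d_in) array of per-sensor observations -> (M, d_k) latent."""
    K = obs @ W_k                                  # each sensor embedded identically
    V = obs @ W_v
    A = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)   # (M, N) attention over sensors
    return A @ V                                   # sum over sensors: order-free

obs = rng.normal(size=(N, d_in))
out1 = attention_pool(obs)
out2 = attention_pool(obs[rng.permutation(N)])     # shuffle the sensor ordering
print(np.allclose(out1, out2))                     # True: permutation invariant
```

Because the attention weights and values are permuted by the same index, the weighted sum over sensors is identical for any input ordering, which is why such an agent keeps working when its observations are shuffled mid-episode.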