Agent, do you see it now? systematic generalisation in deep reinforcement learning

09 Mar 2022, 10:53 (modified: 18 Apr 2022, 15:40) · ALOE@ICLR2022
Keywords: Systematic Generalisation, Logic, Reinforcement Learning, Convolutional Neural Networks
TL;DR: We provide evidence that agents can systematically generalise logical operators from a limited variety of instructions, and we show the key role CNNs play in that ability.
Abstract: Systematic generalisation, i.e., the algebraic capacity to understand and execute unseen tasks by combining already known primitives, is one of the most desirable features of a computational model. Good adaptation to novel tasks in open-ended settings relies heavily on the ability of agents to reuse their past experience and recombine meaningful learning pieces to tackle new goals. In this work, we analyse how the architecture of the convolutional layers affects the performance of autonomous agents generalising zero-shot to unseen tasks while executing human instructions. Our findings suggest that a convolutional architecture correctly suited to the environment the agent will interact with may matter more than training a generic convolutional network in that environment.