A Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning

12 Oct 2021 (modified: 05 May 2023) · Deep RL Workshop NeurIPS 2021
Keywords: consciousness, planning, reinforcement learning, deep learning
TL;DR: We introduce into reinforcement learning inductive biases inspired by higher-order cognitive functions. These enable the planner to direct attention dynamically to the relevant parts of the state at each step of imagined future trajectories.
Abstract: We present an end-to-end, model-based deep reinforcement learning agent which dynamically attends to relevant parts of its state, in order to plan and to generalize better out of distribution. The agent's architecture uses a set representation and a bottleneck mechanism, forcing the number of entities the agent attends to at each planning step to be small. In experiments, we investigate the bottleneck mechanism in sets of customized environments featuring different dynamics. We consistently observe that this design allows agents to learn to plan effectively by attending to the relevant objects, leading to better out-of-distribution generalization.
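The core bottleneck idea described in the abstract can be illustrated with a minimal sketch (this is an assumption-laden toy, not the authors' implementation): from a set of entity embeddings representing the state, a learned query scores each entity via scaled dot-product attention, and only the top-k entities pass through to the next planning step.

```python
# Toy sketch of an attention bottleneck over a set of state entities.
# Not the paper's code: entity/query shapes and top-k selection are
# illustrative assumptions about the mechanism the abstract describes.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_bottleneck(entities, query, k):
    """entities: (N, d) set of entity embeddings; query: (d,) learned query.
    Returns the k entities with the highest attention weight, plus weights."""
    scores = entities @ query / np.sqrt(entities.shape[1])  # scaled dot-product
    weights = softmax(scores)
    top = np.argsort(weights)[-k:][::-1]  # indices of the k largest weights
    return entities[top], weights[top]

rng = np.random.default_rng(0)
ents = rng.normal(size=(8, 4))   # a set of 8 entities, 4-dim embeddings
q = rng.normal(size=4)
selected, w = attend_bottleneck(ents, q, k=2)
print(selected.shape)  # (2, 4): only k=2 entities reach the planner
```

Forcing k to be much smaller than the number of entities is what makes the attention a bottleneck: the planner must commit to a few relevant objects per imagined step rather than mixing over the whole set.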