A Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning

Published: 09 Nov 2021, Last Modified: 05 May 2023 · NeurIPS 2021 Poster
Keywords: consciousness, planning, reinforcement learning, deep learning, model-based reinforcement learning, neuro-inspired AI, artificial intelligence, brain-inspired AI
TL;DR: We introduce inductive biases inspired by higher-order cognitive functions into reinforcement learning. These enable the planner to dynamically direct attention to the relevant parts of the state at each step of imagined future trajectories.
Abstract: We present an end-to-end, model-based deep reinforcement learning agent which dynamically attends to relevant parts of its state during planning. The agent uses a bottleneck mechanism over a set-based representation to force the number of entities to which the agent attends at each planning step to be small. In experiments, we investigate the bottleneck mechanism with several sets of customized environments featuring different challenges. We consistently observe that the design allows the planning agents to generalize their learned task-solving abilities to compatible unseen environments by attending to the relevant objects, leading to better out-of-distribution generalization performance.
Supplementary Material: pdf
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Code: https://github.com/PwnerHarry/CP
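The abstract describes an attention bottleneck over a set-based state representation that keeps only a small number of entities at each planning step. The snippet below is a minimal sketch of such a mechanism; the module name, dimensions, learned query, and hard top-k selection are illustrative assumptions, not the authors' implementation from the repository above (which may use a soft or differentiable selection instead).

```python
# Minimal sketch of an attention bottleneck over a set of entity vectors.
# Illustrative only: slot count, dimensions, and top-k selection are assumptions,
# not taken from the paper's implementation.
import torch
import torch.nn as nn


class SetBottleneck(nn.Module):
    """Select a small subset of entities from a set-based state via attention."""

    def __init__(self, entity_dim: int, query_dim: int, k: int = 3):
        super().__init__()
        self.k = k                                          # entities kept per planning step
        self.query = nn.Parameter(torch.randn(query_dim))   # learned planning query
        self.key_proj = nn.Linear(entity_dim, query_dim)    # project entities to key space

    def forward(self, entities: torch.Tensor) -> torch.Tensor:
        # entities: (batch, num_entities, entity_dim)
        keys = self.key_proj(entities)                      # (batch, N, query_dim)
        scores = keys @ self.query                          # (batch, N) attention logits
        topk = scores.topk(self.k, dim=-1).indices          # indices of attended entities
        idx = topk.unsqueeze(-1).expand(-1, -1, entities.size(-1))
        return entities.gather(1, idx)                      # (batch, k, entity_dim)


if __name__ == "__main__":
    bottleneck = SetBottleneck(entity_dim=16, query_dim=32, k=3)
    state = torch.randn(2, 10, 16)        # batch of 2 states, 10 entities each
    attended = bottleneck(state)
    print(attended.shape)                 # torch.Size([2, 3, 16])
```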