The Paradox of Choice: On the Role of Attention in Hierarchical Reinforcement Learning

Published: 21 Oct 2022, Last Modified: 05 May 2023 · Attention Workshop, NeurIPS 2022 Poster
Keywords: Attention, Reinforcement Learning, Hierarchical RL, Options, Temporal Abstraction, Affordances
TL;DR: We characterize affordances as a hard-attention mechanism in hierarchical RL and investigate the role of hard versus soft attention in different scenarios, empirically demonstrating the "paradox of choice".
Abstract: Decision-making AI agents often face two important challenges: the depth of the planning horizon and the branching factor due to having many choices. Hierarchical reinforcement learning methods aim to address the first challenge by providing shortcuts that skip over multiple time steps. To cope with the breadth, it is desirable to restrict the agent's attention at each step to a reasonable number of possible choices. The concept of affordances (Gibson, 1977) suggests that only certain actions are feasible in certain states. In this work, we first characterize "affordances" as a "hard" attention mechanism that strictly limits the available choices of temporally extended options. We then investigate the role of hard versus soft attention in training data collection, abstract value learning in long-horizon tasks, and handling a growing number of choices. To this end, we present an online, model-free algorithm to learn affordances that can be used to further learn subgoal options. Finally, we identify and empirically demonstrate the settings in which the "paradox of choice" arises, i.e., when having fewer but more meaningful choices improves the learning speed and performance of a reinforcement learning agent.
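
To make the hard-versus-soft distinction concrete, here is a minimal sketch (not the authors' algorithm) contrasting a hard affordance mask, which restricts the set of temporally extended options an agent may choose from, with soft attention, which keeps every option but reweights it. Names such as option_values, affordance_mask, and attention_logits are illustrative assumptions rather than anything defined in the paper.

```python
# Hard vs. soft attention over a set of options: an illustrative sketch.
import numpy as np

rng = np.random.default_rng(0)
num_options = 8

option_values = rng.normal(size=num_options)        # estimated value of each option in the current state
affordance_mask = rng.random(num_options) > 0.5     # hard attention: which options are "afforded" here
affordance_mask[rng.integers(num_options)] = True   # ensure at least one option remains afforded
attention_logits = rng.normal(size=num_options)     # soft attention: learned relevance scores

# Hard attention (affordances): exclude non-afforded options, then pick greedily among the rest.
masked_values = np.where(affordance_mask, option_values, -np.inf)
hard_choice = int(np.argmax(masked_values))

# Soft attention: keep all options but weight their values by a softmax over relevance scores.
weights = np.exp(attention_logits - attention_logits.max())
weights /= weights.sum()
soft_choice = int(np.argmax(weights * option_values))

print(f"afforded options: {np.flatnonzero(affordance_mask)}")
print(f"hard-attention choice: {hard_choice}, soft-attention choice: {soft_choice}")
```

The sketch only illustrates the selection step: with hard attention the branching factor shrinks to the afforded subset, whereas soft attention leaves all choices available and relies on the weighting to de-emphasize poor ones.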