About the attractor phenomenon in decomposed reinforcement learning

Romain Laroche, Mehdi Fatemi, Joshua Romoff, Harm van Seijen

Feb 12, 2018 (modified: Jun 04, 2018) · ICLR 2018 Workshop Submission
  • Abstract: We consider tackling a single-agent RL problem by decomposing it into $n$ learners. These learners are generally trained \textit{egocentrically}: each is greedy with respect to its own local focus. In this extended abstract, we show theoretically and empirically that this leads to the presence of attractors: states that attract and detain the agent, contrary to what the global objective function would advise.
  • Keywords: Reinforcement Learning, hierarchical reinforcement learning
  • TL;DR: We show that local greedy optimisation for a decomposed RL problem creates an attractor phenomenon that compromises task completion.
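
As a minimal illustration of the egocentric training described in the abstract (a sketch, not the authors' implementation): assume a toy tabular setting in which two learners share control of a single agent, each with its own local reward and its own Q-table. The behaviour policy is greedy with respect to the sum of the local Q-values, while each learner bootstraps as if it alone controlled the agent. The function names and constants below are hypothetical.

```python
import numpy as np

# Minimal sketch (not the authors' code): two learners share control of one
# agent. Each learner j has its own local reward r_j and its own tabular
# value function Q[j]. All names and sizes here are illustrative.

n_learners, n_states, n_actions = 2, 5, 2
Q = np.zeros((n_learners, n_states, n_actions))
alpha, gamma = 0.1, 0.9

def behaviour_action(s):
    # The agent acts greedily on the aggregated value sum_j Q[j, s, a].
    return int(np.argmax(Q[:, s, :].sum(axis=0)))

def egocentric_update(j, s, a, r_j, s_next):
    # Learner j bootstraps from the max over its OWN Q[j], i.e. it assumes
    # the next action will be greedy for its local objective, even though
    # the behaviour policy actually follows the aggregated values.
    target = r_j + gamma * Q[j, s_next, :].max()
    Q[j, s, a] += alpha * (target - Q[j, s, a])
```

The mismatch in `egocentric_update` (bootstrapping from each learner's own greedy value rather than the value of the action the aggregated policy will actually take) is the local greediness that, per the abstract, gives rise to attractor states.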