In reinforcement learning, all objective functions are not equal

Romain Laroche & Harm van Seijen

Feb 12, 2018, ICLR 2018 Workshop Submission
  • Abstract: We study the learnability of value functions. We set reward back-propagation aside by directly fitting a deep neural network to the analytically computed optimal value function induced by a chosen objective function. We show that some objective functions are easier to train on than others, by several orders of magnitude. In particular, we observe the influence of the $\gamma$ parameter and of the decomposition of the task into subtasks (a toy sketch of this setup is given after this list).
  • Keywords: reinforcement learning, deep learning
  • TL;DR: In reinforcement learning, all objective functions are not equal
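
The following is a minimal, hypothetical sketch (not the authors' code) of the setup the abstract describes: the optimal value function $V^*$ is computed analytically, here by value iteration on a small random tabular MDP, and a small neural network is then fit to $V^*$ by plain supervised regression so that the difficulty of the fit can be compared across discount factors $\gamma$. All sizes, architectures, and hyperparameters are illustrative assumptions.

```python
# Hedged sketch of the abstract's setup: compute V* analytically, then fit a
# network to it by supervised regression and compare the fit across gamma.
import numpy as np
import torch
import torch.nn as nn

def random_mdp(n_states=64, n_actions=4, seed=0):
    """Build a small random tabular MDP (illustrative stand-in for a chosen task)."""
    rng = np.random.default_rng(seed)
    # P[s, a] is a probability distribution over next states s'
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
    R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))
    return P, R

def optimal_values(P, R, gamma, n_iters=2000):
    """Value iteration: V*(s) = max_a [ R(s,a) + gamma * sum_{s'} P(s'|s,a) V*(s') ]."""
    V = np.zeros(P.shape[0])
    for _ in range(n_iters):
        Q = R + gamma * (P @ V)   # shape (n_states, n_actions)
        V = Q.max(axis=1)
    return V

def fit_value_network(V_star, n_epochs=500, lr=1e-3):
    """Regress a small MLP from one-hot states onto the precomputed V*."""
    n_states = len(V_star)
    X = torch.eye(n_states)                                   # one-hot state encoding
    y = torch.tensor(V_star, dtype=torch.float32).unsqueeze(1)
    net = nn.Sequential(nn.Linear(n_states, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(n_epochs):
        opt.zero_grad()
        loss = loss_fn(net(X), y)
        loss.backward()
        opt.step()
    return loss.item()

P, R = random_mdp()
for gamma in (0.5, 0.9, 0.99):
    V_star = optimal_values(P, R, gamma)
    final_loss = fit_value_network(V_star)
    print(f"gamma={gamma}: final regression loss {final_loss:.6f}")
```

In this framing, a larger final regression loss for a given $\gamma$ (or for a given way of decomposing the task into subtasks) would indicate a harder-to-fit target, which is the kind of learnability comparison the abstract refers to.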