Hyperbolic Discounting and Learning Over Multiple Horizons

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
Keywords: Deep learning, reinforcement learning, discounting, hyperbolic discounting, auxiliary tasks
TL;DR: A deep RL agent that learns hyperbolic (and other non-exponential) Q-values and a new multi-horizon auxiliary task.
Abstract: Reinforcement learning (RL) typically defines a discount factor as part of the Markov Decision Process. The discount factor values future rewards by an exponential scheme that leads to theoretical convergence guarantees of the Bellman equation. However, evidence from psychology, economics, and neuroscience suggests that humans and animals instead have hyperbolic time preferences. Here we extend the earlier work of Kurth-Nelson and Redish and propose an efficient deep reinforcement learning agent that acts via hyperbolic discounting and other non-exponential discount mechanisms. We demonstrate that a simple approach approximates hyperbolic discount functions while still using familiar temporal-difference learning techniques in RL. Additionally, and independently of hyperbolic discounting, we make the surprising discovery that simultaneously learning value functions over multiple time horizons is an effective auxiliary task that often improves over state-of-the-art methods.
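The "simple approach" the abstract refers to rests on the identity 1/(1 + k·t) = ∫₀¹ γ^(k·t) dγ: a hyperbolic discount is an average over exponential discounts, so a hyperbolic Q-value can be approximated as a weighted sum of ordinary Q-heads, each trained by temporal-difference learning with its own γᵢ. Below is a minimal NumPy sketch of one such Riemann-sum weighting; `hyperbolic_q` and its exact weighting scheme are illustrative assumptions, not the API of the linked repository.

```python
import numpy as np

def hyperbolic_q(q_heads, gammas, k=1.0):
    """Approximate a hyperbolic Q-value from exponentially discounted heads.

    Uses the identity 1/(1 + k*t) = (1/k) * integral_0^1 g**(1/k - 1) * g**t dg,
    evaluated as a lower Riemann sum over the discount factors `gammas`, so
    Q_hyper(s, a) ~= sum_i w_i * Q_{gamma_i}(s, a).

    q_heads: array of shape (n_heads, ...), Q-estimates trained with gammas[i]
    gammas:  increasing discount factors in (0, 1), one per head
    k:       hyperbolic steepness parameter
    """
    gammas = np.asarray(gammas, dtype=np.float64)
    # Widths of the intervals [gamma_i, gamma_{i+1}], with the last one closing at 1.0.
    widths = np.diff(np.concatenate([gammas, [1.0]]))
    # Riemann weights: interval width times the integrand factor at the left endpoint.
    weights = widths * gammas ** (1.0 / k - 1.0) / k
    return np.tensordot(weights, q_heads, axes=1)

# Sanity check: a single reward of 1 arriving at time t gives Q_gamma = gamma**t,
# so the combined value should be close to the hyperbolic discount 1/(1 + k*t).
gammas = np.linspace(0.01, 0.99, 200)
t, k = 10, 1.0
print(hyperbolic_q(gammas ** t, gammas, k))  # close to 0.0909, up to Riemann-sum error
print(1.0 / (1.0 + k * t))                   # 0.0909...
```

In this framing the per-γ heads also double as the multi-horizon auxiliary task the abstract describes: even when a single head drives behavior, the shorter-horizon heads shape the shared representation.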
Code: [google-research/google-research](https://github.com/google-research/google-research/tree/master/hyperbolic_discount)
Data: [Arcade Learning Environment](https://paperswithcode.com/dataset/arcade-learning-environment)