Average-Reward Learning and Planning with Options

21 May 2021, 20:44 (edited 27 Oct 2021) · NeurIPS 2021 Poster
  • Keywords: average reward, options, reinforcement learning
  • TL;DR: This paper extends learning and planning algorithms within the options framework (Sutton et al. 1999) from discounted MDPs to average-reward MDPs.
  • Abstract: We extend the options framework for temporal abstraction in reinforcement learning from discounted Markov decision processes (MDPs) to average-reward MDPs. Our contributions include general convergent off-policy inter-option learning algorithms, intra-option algorithms for learning values and models, as well as sample-based planning variants of our learning algorithms. Our algorithms and convergence proofs extend those recently developed by Wan, Naik, and Sutton. We also extend the notion of option-interrupting behaviour from the discounted to the average-reward formulation. We show the efficacy of the proposed algorithms with experiments on a continuing version of the Four-Room domain.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
