Investigating Action Encodings in Recurrent Neural Networks in Reinforcement Learning

Published: 07 Jan 2023. Last Modified: 20 Sept 2023. Accepted by TMLR.
Event Certifications: lifelong-ml.cc/CoLLAs/2023/Journal_Track
Abstract: Building and maintaining state to learn policies and value functions is critical for deploying reinforcement learning (RL) agents in the real world. Recurrent neural networks (RNNs) have become a key tool for the state-building problem, and several large-scale RL agents incorporate recurrent networks. Yet while RNNs are a mainstay in many RL applications, the key design choices and implementation details responsible for their performance improvements are often not reported. In this work, we examine one axis along which RNN architectures can be (and have been) modified for use in RL: how action information is incorporated into the state update function of a recurrent cell. We discuss several choices for using action information and empirically evaluate the resulting architectures on a set of illustrative domains. Finally, we outline future work on developing recurrent cells and discuss challenges specific to the RL setting.
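To make the axis of variation concrete, the sketch below shows two common ways action information might enter a recurrent state update: concatenating the previous action with the observation (additive), or selecting action-specific recurrent weights (multiplicative-style conditioning). This is an illustrative NumPy sketch under assumed names, not the paper's implementation (the accompanying code, ActionRNNs.jl, is in Julia).

```python
import numpy as np


def one_hot(action, num_actions):
    """Encode a discrete action as a one-hot vector."""
    v = np.zeros(num_actions)
    v[action] = 1.0
    return v


class AdditiveActionRNNCell:
    """Vanilla RNN cell where the previous action is concatenated with the
    observation before the state update. All names here are illustrative."""

    def __init__(self, obs_dim, num_actions, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = obs_dim + num_actions
        self.W = rng.normal(0.0, 0.1, (hidden_dim, in_dim))
        self.U = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))
        self.b = np.zeros(hidden_dim)

    def step(self, h, obs, action_onehot):
        # Action enters through the input vector only.
        x = np.concatenate([obs, action_onehot])
        return np.tanh(self.W @ x + self.U @ h + self.b)


class MultiplicativeActionRNNCell:
    """Cell with one set of weights per discrete action; the taken action
    selects which weights update the state (multiplicative-style
    conditioning). Again, only an illustrative sketch."""

    def __init__(self, obs_dim, num_actions, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (num_actions, hidden_dim, obs_dim))
        self.U = rng.normal(0.0, 0.1, (num_actions, hidden_dim, hidden_dim))
        self.b = np.zeros((num_actions, hidden_dim))

    def step(self, h, obs, action):
        # Action indexes the parameters of the update itself.
        return np.tanh(self.W[action] @ obs
                       + self.U[action] @ h
                       + self.b[action])


if __name__ == "__main__":
    obs_dim, num_actions, hidden_dim = 4, 3, 8
    obs = np.ones(obs_dim)

    add_cell = AdditiveActionRNNCell(obs_dim, num_actions, hidden_dim)
    h = add_cell.step(np.zeros(hidden_dim), obs, one_hot(1, num_actions))
    print("additive state shape:", h.shape)

    mul_cell = MultiplicativeActionRNNCell(obs_dim, num_actions, hidden_dim)
    h = mul_cell.step(np.zeros(hidden_dim), obs, action=2)
    print("multiplicative state shape:", h.shape)
```

The two cells differ in how strongly the action shapes the update: the additive form treats the action as just another input feature, while the multiplicative form lets it gate the entire transition function, at the cost of parameters that scale with the number of actions.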
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: N/A
Video: https://youtu.be/83vBK8DIdEY
Code: https://github.com/mkschleg/ActionRNNs.jl
Assigned Action Editor: ~Dinesh_Jayaraman2
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 367