Keywords: Reinforcement learning, action smoothness, recurrent neural networks
TL;DR: Improving action smoothness in reinforcement learning controllers through past action-state representation learning, without relying on traditional control algorithms or heuristics.
Abstract: Although deep reinforcement learning (DRL) addresses sequential decision-making problems, state-of-the-art actor-critic algorithms lack an explicit representation of temporal information. Relying on a single observation vector, which carries information from only one time step, combined with densely connected neural networks, leads to unstable, oscillatory actions. Many applied DRL robotics control methods therefore employ reward shaping, low-pass filtering, and traditional controller-based techniques to mitigate this effect. However, the interaction of these components hinders the RL algorithm's performance on its original objective. In this paper, we present a reinforcement learning algorithm extended with past action-state representation learning (PASRL), which enables end-to-end training of RL-based control methods without such heuristics. PASRL is evaluated on the MuJoCo benchmark, producing smoother actions while preserving exploration, eliminating the need for extensive hyperparameter tuning, and providing a simple and efficient solution for enhancing action smoothness.
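As a rough illustration only (the paper's exact PASRL architecture is not specified here), conditioning an actor on a learned summary of past action-state pairs can be sketched as a small recurrent encoder whose latent output is concatenated with the current observation. All names, dimensions, and the GRU-based design below are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: a recurrent encoder summarizes the last k (state, action)
# pairs into a latent vector that is fed to the actor alongside the current state.
# PastEncoder, Actor, and the choice of a GRU are assumptions for illustration.
import torch
import torch.nn as nn

class PastEncoder(nn.Module):
    def __init__(self, state_dim, action_dim, latent_dim=64):
        super().__init__()
        self.gru = nn.GRU(state_dim + action_dim, latent_dim, batch_first=True)

    def forward(self, past_states, past_actions):
        # past_states: (batch, k, state_dim); past_actions: (batch, k, action_dim)
        seq = torch.cat([past_states, past_actions], dim=-1)
        _, h = self.gru(seq)           # h: (1, batch, latent_dim)
        return h.squeeze(0)            # (batch, latent_dim)

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim, latent_dim=64, hidden=256):
        super().__init__()
        self.encoder = PastEncoder(state_dim, action_dim, latent_dim)
        self.net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )

    def forward(self, state, past_states, past_actions):
        z = self.encoder(past_states, past_actions)
        return self.net(torch.cat([state, z], dim=-1))

# Usage example: batch of 8, history of k=5 steps, 17-dim states, 6-dim actions
# (dimensions chosen to resemble a MuJoCo locomotion task, purely for illustration).
actor = Actor(state_dim=17, action_dim=6)
action = actor(torch.randn(8, 17), torch.randn(8, 5, 17), torch.randn(8, 5, 6))
print(action.shape)  # torch.Size([8, 6])
```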
Supplementary Material: zip
Primary Area: reinforcement learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7444