Keywords: Deep Reinforcement Learning (DRL), Variable Time Step Reinforcement Learning (VTS-RL), Adaptive Adjustment of Hyperparameters, Data Efficiency, Robotic System
TL;DR: We introduce MOSEAC, a Variable Time Step Reinforcement Learning (VTS-RL) algorithm with one additional hyperparameter. VTS-RL enables dynamic adjustment of action durations, reducing computational load by executing actions only when necessary.
Abstract: Traditional reinforcement learning (RL) methods typically employ a fixed
control loop, where each cycle corresponds to an action. This rigidity poses
challenges in practical applications, as the optimal control frequency is
task-dependent. A suboptimal choice can lead to high computational demands and
reduced exploration efficiency. Variable Time Step Reinforcement Learning
(VTS-RL) addresses these issues by using adaptive frequencies for the control
loop, executing actions only when necessary. This approach, rooted in reactive
programming principles, reduces computational load and extends the action
space by including action durations. However, VTS-RL's implementation is often
complicated by the need to tune multiple hyperparameters that govern
exploration in the multi-objective action-duration space (i.e., balancing task
performance and the number of time steps needed to achieve a goal). To overcome these
challenges, we introduce the Multi-Objective Soft Elastic Actor-Critic
(MOSEAC) method. This method features an adaptive reward scheme that adjusts
hyperparameters based on observed trends in task rewards during training. This
scheme reduces the complexity of hyperparameter tuning, requiring a
single hyperparameter to guide exploration, thereby simplifying the learning
process and lowering deployment costs. We validate the MOSEAC method through
simulations in a Newtonian kinematics environment, demonstrating high task and
training performance with fewer time steps, ultimately lowering energy
consumption. This validation shows that MOSEAC streamlines RL algorithm
deployment by automatically tuning the agent's control-loop frequency with a
single parameter. Its principles can be applied to enhance any RL algorithm,
making it a versatile solution for various applications.
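
The abstract's two key ideas, an action space extended with a duration component and a task-reward/time trade-off governed by a single adaptively tuned hyperparameter, can be sketched roughly as follows. This is a minimal illustration only: the duration bounds, the reward shaping, and the alpha update rule are assumptions for exposition, not the exact MOSEAC formulation from the paper.

```python
import numpy as np

class VariableTimeStepWrapper:
    """Wraps a classic Gym-style environment so the agent also chooses how
    long each chosen action is held (the VTS-RL idea from the abstract)."""

    def __init__(self, env, dt_min=0.02, dt_max=0.5):
        # dt_min/dt_max are assumed duration bounds, not values from the paper.
        self.env = env
        self.dt_min, self.dt_max = dt_min, dt_max

    def step(self, action_with_duration):
        # Assumption: the last action dimension in [-1, 1] encodes the duration.
        *control, d = action_with_duration
        duration = self.dt_min + (d + 1.0) / 2.0 * (self.dt_max - self.dt_min)
        # Emulate the duration by holding the control for several simulator
        # sub-steps (assumes the wrapped env returns the classic 4-tuple).
        n_substeps = max(1, int(round(duration / self.dt_min)))
        total_reward, done, info, obs = 0.0, False, {}, None
        for _ in range(n_substeps):
            obs, r, done, info = self.env.step(np.asarray(control))
            total_reward += r
            if done:
                break
        return obs, total_reward, duration, done, info


def shaped_reward(task_reward, duration, alpha):
    """Single-hyperparameter trade-off between task reward and elapsed time
    (an assumed form, not the paper's exact reward)."""
    return alpha * task_reward - duration


def adapt_alpha(alpha, reward_history, lr=0.01, window=100):
    """Nudge alpha upward when the task-reward trend stalls; a stand-in for
    the adaptive scheme the abstract describes, whose exact rule is not given
    here."""
    if len(reward_history) < 2 * window:
        return alpha
    recent = np.mean(reward_history[-window:])
    previous = np.mean(reward_history[-2 * window:-window])
    if recent <= previous:  # no improvement: emphasize task reward more
        alpha += lr
    return alpha
```

In a full actor-critic implementation, the duration would simply be one extra dimension of the policy's action output, and a shaped reward of this kind would replace the raw environment reward when computing critic targets.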
Submission Number: 23