Abstract: We propose a *gradient-free* deep reinforcement learning algorithm to solve *high-dimensional*, finite-horizon stochastic control problems.
Although recently developed deep reinforcement learning frameworks have achieved great success on these problems, direct estimation of policy gradients from Monte Carlo sampling often suffers from high variance. To address this, we introduce the Momentum Consensus-Based Optimization (M-CBO) and Adaptive Momentum Consensus-Based Optimization (Adam-CBO) frameworks, which optimize policies using Monte Carlo estimates of the value function rather than of its gradients. Adjustable Gaussian noise enables efficient exploration, helping the algorithms converge to optimal policies in complex, nonconvex environments. Numerical results confirm the accuracy and scalability of our approach across problem dimensions and indicate that it extends to mean-field control problems. Theoretically, we prove that M-CBO converges to the optimal policy under suitable assumptions.
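For readers unfamiliar with consensus-based optimization, the sketch below illustrates the generic gradient-free CBO update with a momentum term that methods like M-CBO build on: an ensemble of candidate policy parameters drifts toward a Gibbs-weighted consensus point and is perturbed by scaled Gaussian noise, using only (Monte Carlo) evaluations of the objective. This is a minimal illustrative sketch, not the paper's exact algorithm; the objective `value_estimate`, the hyperparameters, and the momentum update are assumptions chosen for clarity.

```python
import numpy as np

def consensus_point(X, losses, beta):
    """Gibbs-weighted average of particles; larger beta concentrates
    the weight on low-loss particles."""
    w = np.exp(-beta * (losses - losses.min()))  # shift by min for numerical stability
    return (w[:, None] * X).sum(axis=0) / w.sum()

def cbo_with_momentum(value_estimate, X0, n_steps=200, dt=0.01,
                      lam=1.0, sigma=0.7, beta=30.0, gamma=0.9):
    """Minimal momentum-CBO sketch (illustrative, not the paper's M-CBO).

    value_estimate: maps a parameter vector to a Monte Carlo estimate of the
                    objective to be minimized (e.g., the negative value function).
    X0:             (n_particles, dim) initial parameter ensemble.
    """
    X = X0.copy()
    V = np.zeros_like(X)  # momentum buffer
    for _ in range(n_steps):
        losses = np.array([value_estimate(x) for x in X])
        x_bar = consensus_point(X, losses, beta)
        drift = -lam * (X - x_bar)                                # pull toward consensus
        noise = sigma * (X - x_bar) * np.random.randn(*X.shape)   # anisotropic Gaussian exploration
        V = gamma * V + drift * dt + noise * np.sqrt(dt)
        X = X + V
    losses = np.array([value_estimate(x) for x in X])
    return consensus_point(X, losses, beta)

# Usage: minimize a toy nonconvex objective standing in for the control value function.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x: np.sum(x**2) + 2.0 * np.sum(1.0 - np.cos(3.0 * x))  # Rastrigin-like landscape
    x_star = cbo_with_momentum(f, rng.normal(size=(100, 10)))
    print("estimated minimizer norm:", np.linalg.norm(x_star))
```

Because the update uses only objective evaluations, the same loop applies when the loss is a noisy Monte Carlo estimate of a value function; the noise scale `sigma` plays the role of the adjustable exploration noise described above.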
Lay Summary: This paper presents a new approach to solving reinforcement learning problems, in which agents learn to make decisions over time without a predefined model. Our method is more efficient than existing ones and can be extended to a variety of complex settings. By improving the agent's ability to explore different solutions, our approach helps it identify optimal strategies more quickly, and it shows promise for real-world applications such as robotics and finance.
Link To Code: https://github.com/Lyuliyao/ADAM_CBO_control
Primary Area: Reinforcement Learning
Keywords: deep reinforcement learning, optimal control, McKean-Vlasov control, consensus-based optimization
Submission Number: 308