A novel approach to feedback control with deep reinforcement learning

Published: 01 Jan 2025, Last Modified: 15 May 2025 · Systems & Control Letters, 2025 · CC BY-SA 4.0
Abstract: We present a novel approach to feedback control design that leverages the power of deep reinforcement learning (RL). The goal is to blend the RL methodology with mathematical analysis to extract an explicit feedback control for systems where, because of constraints or limited measurements, the classical approaches of control theory (e.g. dynamic programming, optimal-control-based feedback, backstepping) cannot be used.

We study a dynamical system of mosquito populations for biological pest control using the Sterile Insect Technique (SIT), a method traditionally applied in agriculture that involves releasing large numbers of sterile insects into the wild to reduce pest populations. Our goal is to derive a feedback control that globally stabilizes the system around the zero-mosquito equilibrium using only practical measurements, such as total male and female mosquito counts, rather than detailed counts of sterilized versus potent males or fecund versus unfecund females, which are often inaccessible. This physical constraint challenges the classical methods of control theory, since the full state cannot be measured. To address this, we apply deep reinforcement learning to suggest feedback laws for a discretized system that rely only on these accessible, real-world measurements, obtainable through methods such as pheromone traps. Finally, we leverage the trained neural network to extract explicit feedback controls that stabilize the original continuous system over a wide range of initial conditions.

Many other dynamical systems arising from practical applications are subject to measurement constraints, which render the stabilization problem complex from a mathematical perspective. We believe that this approach could help in finding new solutions to such problems.
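The partial-observation setting described in the abstract can be sketched in a toy simulation. The model below is a hypothetical, heavily simplified SIT system (fertile males, released sterile males, fecund females), not the paper's model: all rates, the Euler discretization, and the proportional release law standing in for the extracted feedback are illustrative assumptions. What it does mirror is the measurement constraint: the feedback sees only aggregate counts (total males, total females), never the sterile/fertile or fecund/unfecund breakdown.

```python
# Toy SIT stabilization sketch (illustrative only; not the paper's model or
# its RL-derived policy). State: (M, Ms, F) = fertile males, sterile males,
# fecund females. The policy observes only (M + Ms, F), as in the paper's
# measurement constraint.

def step(state, u, dt=0.05):
    """One explicit-Euler step of a toy SIT model; u is the sterile-male
    release rate (the control input). All rates are hypothetical."""
    M, Ms, F = state
    r, mu_f, mu_m, mu_s = 2.0, 1.0, 1.0, 1.0      # hypothetical rates
    mating = M / (M + Ms + 1e-9)                   # chance a female mates with a fertile male
    dF = r * F * mating - mu_f * F                 # births require fertile mating
    dM = r * F * mating - mu_m * M
    dMs = u - mu_s * Ms                            # releases minus mortality
    return (max(M + dt * dM, 0.0),
            max(Ms + dt * dMs, 0.0),
            max(F + dt * dF, 0.0))

def feedback(obs, k=10.0):
    """Hypothetical explicit feedback law: release sterile males in
    proportion to the *observable* total female count only."""
    total_males, total_females = obs
    return k * total_females

def simulate(policy, state=(50.0, 0.0, 50.0), horizon=4000):
    """Roll out a policy that sees only the aggregate measurements."""
    for _ in range(horizon):
        obs = (state[0] + state[1], state[2])      # (total males, total females)
        state = step(state, policy(obs))
    return state
```

Under this toy dynamics the proportional law drives all three components toward the zero-mosquito equilibrium, while the uncontrolled system (zero release rate) grows without bound; the paper's contribution is obtaining such an explicit law, via RL plus analysis, for the true constrained model where classical design tools do not apply.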