Dynamic Event-Triggered Model-Free Reinforcement Learning for Cooperative Control of Multiagent Systems
Abstract: In this article, a novel model-free dynamic event-triggered adaptive learning control scheme is developed for continuous-time linear multiagent systems. This scheme differs from model-based control schemes in that prior knowledge of the system model is not required. To further reduce transmitted data, event-triggered control policies based on a static event-triggered mechanism (SETM) and a dynamic event-triggered mechanism (DETM) are proposed. Compared with the SETM, the DETM can produce significantly larger average inter-event intervals while maintaining control performance. In addition, based on off-policy integral reinforcement learning, an adaptive iteration method is proposed together with a convergence proof. Numerical tests on both linear and nonlinear multiagent systems demonstrate that the proposed scheme guarantees learning performance and yields larger triggering intervals. Finally, the learning control scheme is tested on a multiarea power system, illustrating the reliability and practicality of the method. Specifically, the load frequency control problem of the multiarea power system is studied under three control schemes, revealing that the DETM achieves a better frequency response at the lowest information transmission rate and ensures the overall quality and reliability of the power system.
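To make the SETM/DETM comparison in the abstract concrete, the following is a minimal illustrative sketch (not the paper's exact formulation): a scalar system under zero-order-hold state feedback, where the static mechanism fires when the measurement error exceeds a state-dependent threshold, and the dynamic mechanism filters that same condition through an internal variable. All gains (`sigma`, `lam`, `theta`, `k`) are assumed values chosen for illustration only.

```python
# Sketch of static vs. dynamic event-triggered sampling on x' = -x + u,
# u = -k * x_hat, where x_hat is the state held since the last event.
sigma = 0.05  # illustrative threshold gain for the static condition

def simulate(trigger, T=10.0, dt=1e-3, x0=1.0, k=1.0):
    """Euler-integrate the closed loop and count triggering events."""
    x, x_hat, events = x0, x0, 0
    for _ in range(int(T / dt)):
        e = x_hat - x              # measurement error since last event
        if trigger(x, e, dt):      # mechanism decides whether to sample
            x_hat = x
            events += 1
        x += dt * (-x - k * x_hat)
    return events

def setm(x, e, dt):
    # Static ETM: fire as soon as e^2 >= sigma * x^2.
    return e * e >= sigma * x * x

class Detm:
    # Dynamic ETM: internal variable eta relaxes the static condition,
    # firing only when eta + theta * (sigma*x^2 - e^2) < 0.
    def __init__(self, lam=1.0, theta=5.0, eta0=1.0):
        self.eta, self.lam, self.theta = eta0, lam, theta
    def __call__(self, x, e, dt):
        s = sigma * x * x - e * e
        fire = self.eta + self.theta * s < 0.0
        self.eta += dt * (-self.lam * self.eta + s)  # eta dynamics
        return fire

n_setm = simulate(setm)
n_detm = simulate(Detm())
# The dynamic mechanism triggers fewer events over the same horizon,
# i.e., it produces larger average inter-event intervals.
```

Because the internal variable `eta` stays nonnegative along trajectories, the dynamic condition is never more conservative than the static one, which is why the DETM enlarges the average inter-event time without degrading stability.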
External IDs: dblp:journals/tr/WangTM25