Graph Multi-Agent Reinforcement Learning for Inverter-Based Active Voltage Control

Published: 01 Jan 2024 · Last Modified: 25 Jan 2025 · IEEE Transactions on Smart Grid, 2024 · CC BY-SA 4.0
Abstract: Active voltage control (AVC) is a widely used technique for improving voltage quality, which is essential in emerging active distribution networks (ADNs). However, the voltage fluctuations caused by intermittent renewable energy are difficult for traditional voltage control methods to handle. In this paper, the voltage control problem is formulated as a decentralized partially observable Markov decision process (Dec-POMDP), and a multi-agent reinforcement learning (MARL) algorithm is developed that treats each controllable device as an agent. The formulation aims to adjust the agents' strategies so as to keep the voltage within the specified range while reducing network loss. To better represent the mutual interaction between the agents, a graph convolutional network (GCN) is introduced. By aggregating the information of adjacent agents, the GCN effectively extracts complex latent features, which in turn supports the generation of voltage control strategies for the agents. Meanwhile, a barrier function is applied in MARL to keep the system voltage within a safe operating range. Comparative studies with traditional voltage control and other MARL methods on the IEEE 33-bus and 141-bus systems demonstrate the performance of the proposed approach.
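The two core ingredients named in the abstract can be illustrated with a minimal sketch: a GCN layer that aggregates observations from adjacent agents, and a barrier-style reward term that penalizes voltages leaving a safe band. This is not the authors' implementation; the adjacency matrix, feature dimensions, the log-barrier form, and the 0.95–1.05 p.u. band are illustrative assumptions.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    H : (n_agents, d_in)   per-agent observation features
    A : (n_agents, n_agents) 0/1 adjacency of the distribution network
    W : (d_in, d_out)      learnable weights
    """
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))       # symmetric normalization
    H_agg = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H  # aggregate neighbor features
    return np.maximum(H_agg @ W, 0.0)            # ReLU

def barrier_reward(v, v_min=0.95, v_max=1.05, eps=1e-6):
    """Log-barrier-style penalty (hypothetical form) that becomes strongly
    negative as bus voltage v (p.u.) approaches the edges of the safe band."""
    v = np.clip(v, v_min + eps, v_max - eps)     # keep log arguments positive
    return np.log(v - v_min) + np.log(v_max - v)

# Toy usage: 4 agents on a small feeder, 3-dim observations, 8-dim embeddings.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 8))
embeddings = gcn_layer(H, A, W)                  # shared latent features per agent
penalty = barrier_reward(np.array([0.98, 1.02, 1.04, 0.96])).sum()
```

In a full MARL pipeline, each agent's policy would consume its row of the GCN embedding, and the barrier term would be added to the reward alongside the network-loss objective; the exact architecture and reward weighting used in the paper are not specified on this page.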