A Graph Policy Network Approach for Volt-Var Control in Power Distribution Systems

12 Oct 2021, 19:37 (modified: 18 Nov 2021, 21:31) · Deep RL Workshop NeurIPS 2021
Keywords: optimization, representation learning, reinforcement learning, graph neural networks, power systems
TL;DR: We propose a framework to use graph-neural networks as policy representations to solve the volt-var control problem
Abstract: Volt-var control (VVC) is the problem of keeping power distribution systems within healthy operating regimes by controlling actuators in the system. Existing works have mostly followed the conventional routine of representing the power system (a graph with tree topology) as a vector in order to train deep reinforcement learning (RL) policies. We propose a framework that combines RL with graph neural networks and study the benefits and limitations of graph-based policies in the VVC setting. Our results show that graph-based policies asymptotically converge to the same rewards as their vector-representation counterparts, but at a slower rate. We conduct further analysis on the impact of both observations and actions: on the observation side, we examine the robustness of graph-based policies to two typical data-acquisition errors in power systems, namely sensor communication failure and measurement misalignment. On the action side, we show that actuators have varying impacts on the system, so a graph representation induced by the power system's topology may not be the optimal choice. Finally, we present a case study demonstrating that the choice of readout-function architecture and graph augmentation can further improve training performance and robustness.
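The core idea of the abstract, using the feeder's tree topology to structure the policy network, can be illustrated with a minimal sketch. This is not the paper's code: the 5-bus feeder, feature choices, layer sizes, and mean-pool readout are all hypothetical. Node features stand in for local bus measurements, message passing runs over the normalized adjacency of the tree, and a readout maps the pooled embedding to one logit per actuator.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A_hat, H, W):
    """One graph-convolution layer: aggregate over neighbors, then transform (ReLU)."""
    return np.maximum(A_hat @ H @ W, 0.0)

# Hypothetical 5-bus radial feeder (tree topology): edges 0-1, 1-2, 1-3, 3-4.
edges = [(0, 1), (1, 2), (1, 3), (3, 4)]
n = 5
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A_self = A + np.eye(n)                    # add self-loops
d = A_self.sum(axis=1)
A_hat = A_self / np.sqrt(np.outer(d, d))  # symmetric normalization

# Toy node features per bus, e.g. [voltage magnitude, net power injection].
X = rng.normal(size=(n, 2))
W1 = rng.normal(size=(2, 8)) * 0.1
W2 = rng.normal(size=(8, 8)) * 0.1
W_out = rng.normal(size=(8, 3)) * 0.1     # 3 actuators (hypothetical count)

H = gcn_layer(A_hat, X, W1)               # message passing, layer 1
H = gcn_layer(A_hat, H, W2)               # message passing, layer 2
pooled = H.mean(axis=0)                   # mean-pool readout over buses
action_logits = pooled @ W_out            # one logit per actuator
print(action_logits.shape)
```

The readout is the part the case study in the abstract varies: swapping the mean-pool for a different readout architecture, or augmenting the graph with extra edges, changes how actuator-relevant information is aggregated.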
Supplementary Material: zip