Designing a realistic RL environment for power systems

Anonymous

17 Jan 2022 (modified: 05 May 2023) · Submitted to BT@ICLR2022
Keywords: Reinforcement Learning, Power Systems, Simulation
Abstract: Power grids are critical infrastructure: ensuring they are reliable, robust, and secure is essential to humanity, to everyday life, and to progress. With increasing renewable generation, growing electricity demand, and more severe weather events due to climate change, maintaining efficient and robust power distribution poses a tremendous challenge to grid operators. In recent years, Reinforcement Learning (‘RL’) has shown substantial progress on highly complex, nonlinear problems, such as the game of Go tackled by AlphaGo, and it is now plausible that an RL agent could address the growing challenge of grid control. Learning to Run a Power Network (‘L2RPN’) is a competition, organized by Réseau de Transport d’Electricité and the Electric Power Research Institute, that tests the ability of RL and other algorithms to safely control electricity transportation in power grids. In 2020, the L2RPN winners used a Semi-Markov Afterstate Actor-Critic (‘SMAAC’) approach to successfully manage a grid. L2RPN represents an important first step toward commercializing AI for the power grid, but the RL environment needs further refinement before it is realistic enough for real-world application.
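The grid-control setting described above follows the standard observe–act–reward loop of RL environments. As a minimal, self-contained sketch of that loop (all class and function names here are illustrative toys, not the actual L2RPN/Grid2Op API), consider an environment whose state is the load on a single transmission line and whose agent chooses between reconfiguring the grid and doing nothing:

```python
import random

class ToyGridEnv:
    """Toy stand-in for a grid environment: the state is the load on one
    line; the agent picks action 0 (shed load) or 1 (do nothing)."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.load = 0.5
        self.steps = 0

    def reset(self):
        self.load = 0.5
        self.steps = 0
        return self.load

    def step(self, action):
        if action == 0:
            # Reconfiguration reduces the line load.
            self.load = max(0.0, self.load - 0.2)
        else:
            # Doing nothing lets demand drift upward.
            self.load = min(1.5, self.load + self.rng.uniform(0.0, 0.3))
        self.steps += 1
        # Episode ends on overload (load > 1.0) or after 50 steps.
        done = self.load > 1.0 or self.steps >= 50
        # Reward survival; penalize an overload.
        reward = -10.0 if self.load > 1.0 else 1.0
        return self.load, reward, done

def run_episode(env, policy):
    """Roll out one episode and return the cumulative reward."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total

# A trivial threshold policy: reconfigure whenever the load looks risky.
total = run_episode(ToyGridEnv(), lambda load: 0 if load > 0.7 else 1)
```

Real L2RPN environments expose the same loop shape, but with rich observations (line flows, generator setpoints, topology) and combinatorial action spaces, which is exactly what makes approaches like SMAAC necessary.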
ICLR Paper: https://openreview.net/pdf?id=LmUJqB1Cz8
