Abstract: With the ongoing integration of distributed energy resources, modern distribution systems are gaining sufficient generation capacity to perform active restoration after outages without transmission system support. Model-based approaches, which rely on accurate system models, are widely used to solve service restoration problems. Deep reinforcement learning is regarded as a promising alternative, although it has not been sufficiently explored for this task. In this article, the service restoration process is formulated as a partially observable Markov decision process, and an attention-based multiagent graph reinforcement learning approach is proposed to train multiple agents to cooperatively achieve the restoration goal, reinforcing system resilience against extreme events. To capture the connections and correlations between nodes during service restoration, the state of the active distribution network is represented as graph data containing both topology and node features. The agents' perception is enhanced by graph convolutional networks during feature extraction, supplying them with more comprehensive information from which to learn more effective restoration strategies. In addition, an attention-based centralized training scheme is developed for the multiagent system, focusing on the relations between agents to strengthen their teamwork capability. The performance of the proposed method is verified by a set of comparative studies on the IEEE-118 system with dispatchable generators, rooftop photovoltaics, and energy storage systems operating simultaneously.
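To make the two core ingredients concrete, the following is a minimal NumPy sketch of (a) one graph-convolution layer over the feeder graph, as used for node feature extraction, and (b) scaled dot-product attention weights of one agent's query over the other agents' keys, as used in attention-based centralized training. This is an illustrative sketch under generic assumptions, not the paper's implementation; all function names, shapes, and the choice of ReLU are hypothetical.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    adj      : (n, n) 0/1 adjacency matrix of the feeder graph (no self-loops)
    features : (n, f_in) node features (e.g. load, voltage, DER output)
    weight   : (f_in, f_out) learnable weight matrix
    """
    a_hat = adj + np.eye(adj.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weight
    return np.maximum(propagated, 0.0)                 # ReLU

def attention_weights(query, keys):
    """Softmax of scaled dot-products: how much one agent attends to each other agent.

    query : (d,) embedding of the attending agent
    keys  : (m, d) embeddings of the other agents
    """
    scores = keys @ query / np.sqrt(query.shape[0])    # scaled dot-product
    exp = np.exp(scores - scores.max())                # numerically stable softmax
    return exp / exp.sum()
```

Stacking such layers lets each node's embedding aggregate information from its multi-hop neighborhood, and the attention weights let the centralized critic emphasize the agents most relevant to each decision.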