Multistep multiagent reinforcement learning for optimal energy schedule strategy of charging stations in smart grid
Abstract: An efficient energy scheduling strategy for a charging station is crucial for stabilizing the electricity market and accommodating the charging demand of electric vehicles (EVs). Most existing studies on energy scheduling strategies fail to coordinate the processes of energy purchasing and distribution and thus cannot balance energy supply and demand.
In addition, the existence of multiple charging stations in a complex scenario makes it difficult to develop a unified scheduling strategy for different charging stations. To solve these problems, in this article we propose a multiagent reinforcement learning (MARL) method to learn the optimal energy purchasing strategy and an online heuristic dispatching scheme to develop the energy distribution strategy. Unlike traditional scheduling methods, the two proposed strategies are coordinated with each other in both temporal and spatial dimensions to form a unified energy scheduling strategy for charging stations. Specifically,
the proposed MARL method combines multiagent deep deterministic policy gradient (MADDPG) principles for learning the purchasing strategy with a long short-term memory (LSTM) neural network for predicting the charging demand of EVs.
Moreover, a multistep reward function is developed to accelerate the learning process. The proposed method is verified by
comprehensive simulation experiments based on real data from the electricity market in Chicago. The experimental results show that the proposed method achieves better performance than other state-of-the-art energy scheduling methods in the charging market in terms of economic profit and the users' satisfaction ratio.
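As context for the multistep reward mentioned in the abstract, the snippet below is a minimal sketch of an n-step return, one common way to build a multistep learning signal for an actor-critic learner such as MADDPG. The function name, parameters, and discounting scheme are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of an n-step (multistep) return; the paper's specific
# multistep reward function is not reproduced here.
import numpy as np

def n_step_return(rewards, bootstrap_value, gamma=0.99, n=5):
    """Discounted sum of up to n future rewards plus a bootstrapped tail value.

    rewards          : sequence of per-step rewards r_t, ..., r_{t+n-1}
    bootstrap_value  : critic estimate of the value at step t+n
    gamma            : discount factor
    n                : number of lookahead steps
    """
    rewards = np.asarray(rewards[:n], dtype=float)
    discounts = gamma ** np.arange(len(rewards))
    return float(np.sum(discounts * rewards)
                 + (gamma ** len(rewards)) * bootstrap_value)

# Example: 5-step return over a short reward trace with a critic tail estimate.
print(n_step_return([1.0, 0.5, 0.0, 0.2, 0.8], bootstrap_value=3.0))
```

Compared with a one-step temporal-difference target, such a multistep target propagates delayed reward information over several time steps at once, which is the usual rationale for using it to accelerate learning.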