Reinforcement Learning for Optimized EV Charging Through Power Setpoint Tracking

Yunus Emre Yilmaz, Stavros Orfanoudakis, Pedro P. Vergara

Published: 01 Jan 2024, Last Modified: 15 Apr 2026 · IEEE PES Innovative Smart Grid Technologies Europe (ISGT EUROPE 2024) · CC BY-SA 4.0
Abstract: Decarbonizing the transportation sector involves adopting electric vehicles (EVs), a shift that introduces significant challenges in energy distribution management and raises concerns about grid stability. Charge Point Operators (CPOs) play a central role in this transition, as they control the EV charging process while balancing the needs of EV users and the grid. This study presents a smart-charging model from the perspective of a CPO managing EVs in a commercial parking lot, with the goal of minimizing the Power Setpoint Tracking (PST) error. To solve this sequential decision-making problem, a Markov Decision Process (MDP) model is designed and solved using Deep Deterministic Policy Gradient (DDPG), a Deep Reinforcement Learning (DRL) algorithm. The proposed model effectively manages the uncertainties associated with EV arrivals and fluctuating charging demands by structuring the action and state spaces to incorporate power constraints. Experimental evaluation using realistic EV behavior data shows that the proposed approach significantly outperforms uncontrolled charging, reducing the PST error while effectively managing multiple EV chargers and EVs with varying battery capacities and power limitations.
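The PST objective described above can be illustrated with a minimal sketch: the agent's action assigns a charging power to each charger, each power is clipped to that charger's limit (mirroring the power constraints embedded in the action space), and the reward penalizes the absolute deviation between total delivered power and the grid setpoint. The function name, signature, and numbers below are illustrative assumptions, not the paper's actual implementation.

```python
def pst_reward(setpoint_kw, actions_kw, max_powers_kw):
    """Hypothetical PST reward: clip each charger's requested power to
    [0, p_max], then penalize |setpoint - total delivered power|.

    Returns (reward, total_delivered_kw)."""
    delivered = [min(max(a, 0.0), p_max)
                 for a, p_max in zip(actions_kw, max_powers_kw)]
    total = sum(delivered)
    # Negative tracking error: a perfect match yields reward 0.
    return -abs(setpoint_kw - total), total

# Example: two 11 kW chargers asked for 10 kW and 9 kW against an
# 18 kW setpoint deliver 19 kW in total, giving a reward of -1.0.
reward, total = pst_reward(18.0, [10.0, 9.0], [11.0, 11.0])
```

In a full DDPG setup this reward would be computed per time step inside the environment, with the state also tracking EV arrivals, departures, and remaining energy demand as the abstract describes.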