Sustainable Mobility Through Intelligent Traffic Signals: A Reinforcement Learning Approach to Emission Reduction and Vehicle Prioritization
Abstract: Traffic congestion and vehicular emissions remain critical challenges in urban mobility. While reinforcement learning (RL) has shown promise in adaptive traffic signal control, conventional models may inadvertently encourage private vehicle use by merely reducing delay. In this study, we present a Q-learning-based traffic signal control framework enhanced with a vehicle prioritization mechanism for public transport and emergency vehicles. Implemented using Simulation of Urban MObility (SUMO), our approach is evaluated on a four-arm intersection scenario. Compared to fixed-time control, the standard Q-learning model achieves an 80% reduction in average vehicle delay and an over-80% decrease in CO₂ emissions. The prioritized Q-learning variant further improves delay and emission metrics while providing preferential treatment to high-impact vehicle categories. Crucially, this prioritization strategy helps incentivize public transport use, mitigating the risk of increased private-car dependence that often follows general congestion-reduction efforts. Our results demonstrate that integrating vehicle prioritization into RL-based traffic control supports both sustainability and modal-shift goals in intelligent transportation systems.
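The abstract does not include implementation details, but the core idea of tabular Q-learning with a priority-weighted reward can be sketched briefly. The following is a minimal illustrative sketch, not the authors' implementation: the `PRIORITY_WEIGHT` values, the discretized state encoding, and the hyperparameters are all assumptions, and the waiting-time observations stand in for data a simulator interface (e.g., SUMO via TraCI) would supply each step.

```python
import random
from collections import defaultdict

# Hypothetical priority weights (assumption, not from the paper): delay
# incurred by buses and emergency vehicles is penalized more heavily,
# steering the learned policy toward serving high-impact vehicles first.
PRIORITY_WEIGHT = {"car": 1.0, "bus": 5.0, "emergency": 10.0}

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

q_table = defaultdict(float)  # maps (state, action) -> estimated return

def reward(waiting):
    """Negative priority-weighted total waiting time at the intersection.

    `waiting` is a list of (vehicle_type, waiting_time_s) pairs that a
    simulator would report for the current step.
    """
    return -sum(PRIORITY_WEIGHT[vtype] * t for vtype, t in waiting)

def choose_action(state, actions):
    """Epsilon-greedy selection over the available signal phases."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table[(state, a)])

def update(state, action, r, next_state, actions):
    """Standard tabular Q-learning update rule."""
    best_next = max(q_table[(next_state, a)] for a in actions)
    q_table[(state, action)] += ALPHA * (r + GAMMA * best_next - q_table[(state, action)])

# Example step with synthetic observations (phase 0 = N-S green, 1 = E-W green):
phases = [0, 1]
s = ("NS_queue_high", "EW_queue_low")   # discretized state (assumed encoding)
a = choose_action(s, phases)
obs = [("car", 12.0), ("bus", 30.0)]    # vehicles waiting during this step
s_next = ("NS_queue_low", "EW_queue_low")
update(s, a, reward(obs), s_next, phases)
```

Under this formulation, the unprioritized baseline corresponds to setting all weights to 1.0; raising the bus and emergency weights is one plausible way to obtain the preferential treatment the abstract describes.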