ANALYZING THE ROBUSTNESS OF ADAPTIVE TRAFFIC CONTROL SYSTEM USING REINFORCEMENT LEARNING FOR URBAN TRAFFIC FLOW
Abstract: This study investigated the robustness of reinforcement learning (RL) based adaptive traffic control systems (ATCS) in managing unseen traffic patterns and conditions. The research evaluated the performance of these systems by analyzing their ability to adapt to and recover from changes in traffic using the microsimulation software SUMO. Two distinct traffic scenarios were prepared in simulation to evaluate performance: a synthetic scenario based on a 4x4 grid network and a real-world scenario modeled after the city of Ingolstadt, Germany. Each scenario included various cases representing different traffic patterns and conditions, such as morning rush hour, evening congestion, special events, blocked roads, and faulty sensors. Following initial training on a specific case for each scenario, RL models representing different ATCS designs were evaluated on unseen traffic events. Robustness was measured with two metrics: the recovery time, defined as the time an RL model takes to return to its optimum level of performance after encountering an unseen event, and the average queue length across all non-empty lanes at each timestep. Results of this study indicated that RL models generally performed well in managing changes in traffic flow but faced challenges with unseen conditions such as roadblocks and sensor failures. Furthermore, models with higher recovery times accumulated larger queues when encountering unseen traffic events in long-running cases.
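The two evaluation metrics named in the abstract can be sketched as simple functions over a queue-length time series. This is an illustrative sketch only: the function names, the recovery tolerance, and the example data are assumptions, not the authors' implementation.

```python
# Hedged sketch of the two robustness metrics from the abstract:
# (1) average queue length over non-empty lanes per timestep, and
# (2) recovery time after an unseen traffic event.
# The 10% recovery tolerance below is an illustrative assumption.

def avg_queue_length(lane_queues):
    """Average queue length over non-empty lanes for one timestep.

    lane_queues: per-lane queue lengths (vehicle counts).
    """
    non_empty = [q for q in lane_queues if q > 0]
    return sum(non_empty) / len(non_empty) if non_empty else 0.0

def recovery_time(metric_series, event_step, baseline, tolerance=0.1):
    """Timesteps after event_step until the performance metric returns
    to within `tolerance` (fractional) of its pre-event baseline.

    Returns None if the model never recovers within the series.
    """
    for t in range(event_step, len(metric_series)):
        if metric_series[t] <= baseline * (1 + tolerance):
            return t - event_step
    return None

# Example: average queue spikes after an event at step 3, then recovers.
series = [2.0, 2.1, 2.0, 8.0, 5.0, 2.1, 2.0]
print(recovery_time(series, event_step=3, baseline=2.0))  # -> 2
```

In a SUMO-based setup, the per-lane queue lengths would typically come from the simulator's lane-level halting-vehicle counts at each simulation step; the sketch abstracts that away into plain lists.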