A Cyber-Physical System for Freeway Ramp Meter Signal Control Using Deep Reinforcement Learning in a Connected Environment

ITSC 2021 (modified: 14 Jun 2022)
Abstract: Freeway bottlenecks such as on-ramp merging areas account for about 40% of recurring freeway congestion. It is generally agreed that building more roads and adding more lanes to existing infrastructure does not solve the congestion problem, so dynamic traffic control measures offer a more cost-effective alternative. Ramp meters, traffic signal devices that regulate the flow of traffic entering freeways, are among the most effective measures for mitigating congestion at freeway on-ramp merging areas. The confluence of deep reinforcement learning (RL) and connectivity offers a promising way to advance ramp meter signal control. Deep RL is a family of machine-learning methods in which an agent learns from interaction with its environment to improve its performance. In this study, three deep RL methods, namely proximal policy optimization (PPO), Ape-X deep Q-network (DQN), and asynchronous advantage actor-critic (A3C), are explored for ramp meter signal control to maximize vehicle speed and traffic throughput and to minimize energy consumption and emissions at freeway on-ramp merging areas in a connected environment. The low computational requirements and scalability of deep RL at deployment make it a powerful optimization tool for time-sensitive applications such as ramp meter signal control. The results of this study show that the deep RL methods outperform both a fixed-time controller and ALINEA, a state-of-the-art feedback controller.
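The agent-environment loop behind this kind of ramp-meter control can be pictured with a small, self-contained sketch. The toy environment below is purely illustrative: its state variables, dynamics, and reward weights are assumptions made for exposition, not the simulator, observation space, or reward function used in the paper (which would typically come from a microscopic traffic simulation fed with connected-vehicle data). It only shows the shape of the decision problem that PPO, Ape-X DQN, or A3C would be trained on: observe mainline and ramp conditions, choose a red/green metering action, and receive a reward that trades speed and throughput against queueing and an emissions proxy.

```python
import random

class ToyRampMergeEnv:
    """Toy stand-in for a connected on-ramp merging environment.

    State:  (mainline density in veh/km/lane, ramp queue in veh),
            assumed observable via connected-vehicle data.
    Action: 0 = ramp meter red (hold the queue), 1 = green (release vehicles).
    Reward: weighted mainline speed and throughput, minus penalties for
            queueing and a congestion-based energy/emissions surrogate.
    All dynamics and coefficients are illustrative, not taken from the paper.
    """

    CRITICAL_DENSITY = 30.0   # veh/km/lane where speed starts to drop
    JAM_DENSITY = 120.0       # veh/km/lane at standstill
    FREE_FLOW_SPEED = 100.0   # km/h

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.density = self.rng.uniform(15.0, 35.0)
        self.queue = self.rng.uniform(0.0, 10.0)
        self.t = 0
        return (self.density, self.queue)

    def _mainline_speed(self):
        # Greenshields-style linear speed-density relation.
        return max(5.0, self.FREE_FLOW_SPEED * (1.0 - self.density / self.JAM_DENSITY))

    def step(self, action):
        # Upstream demand arrives on the ramp every step.
        self.queue += self.rng.uniform(1.0, 3.0)
        released = 0.0
        if action == 1:  # green: release part of the queue onto the mainline
            released = min(self.queue, 4.0)
            self.queue -= released
            self.density += released * 0.5
        # The mainline discharges faster when it is below the critical density.
        outflow = 2.0 if self.density < self.CRITICAL_DENSITY else 1.0
        self.density = max(0.0, self.density - outflow)

        speed = self._mainline_speed()
        throughput = outflow + released
        emissions_penalty = 0.02 * self.density  # congestion proxy for energy/emissions
        reward = 0.01 * speed + 0.1 * throughput - emissions_penalty - 0.05 * self.queue

        self.t += 1
        done = self.t >= 200
        return (self.density, self.queue), reward, done, {}

# Random-policy rollout; a trained PPO/DQN/A3C policy would replace the action choice.
env = ToyRampMergeEnv()
obs, total, done = env.reset(), 0.0, False
while not done:
    action = random.choice([0, 1])
    obs, reward, done, _ = env.step(action)
    total += reward
print(f"episode return under a random policy: {total:.1f}")
```

In practice the random action at the bottom would be replaced by the learned metering policy, and the toy dynamics by a calibrated simulation of the merge area and its connected-vehicle observations.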