Offline Reinforcement Learning for Traffic Signal Control

TMLR Paper253 Authors

11 Jul 2022 (modified: 17 Sept 2024) · Rejected by TMLR · CC BY 4.0
Abstract: Traffic signal control is an important problem in urban mobility with significant potential for economic and environmental impact. While there is growing interest in Reinforcement Learning (RL) for traffic signal control, the work so far has focused on learning through simulations, which can introduce inaccuracies due to simplifying assumptions. Instead, real experience data on traffic is available and could be exploited at minimal cost. Recent progress in {\em offline} or {\em batch} RL has enabled just that. Model-based offline RL methods, in particular, have been shown to generalize from the experience data much better than other approaches. We build a model-based learning framework that infers a Markov Decision Process (MDP) from a dataset collected under a cyclic traffic signal control policy; such policies are commonplace in practice, and the resulting data is easy to gather. The MDP is built with pessimistic costs to manage out-of-distribution scenarios via an adaptive shaping of rewards, which is shown to provide better regularization than prior related work while also being PAC-optimal. Our model is evaluated on a complex signalized roundabout, showing that highly performant traffic control policies can be built in a data-efficient manner.
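To make the pessimistic-MDP construction in the abstract concrete, here is a minimal sketch in Python. The function name `build_pessimistic_mdp`, the `kappa` scale, and the count-based $1/\sqrt{n}$ penalty are illustrative assumptions; the paper's adaptive reward shaping is not specified in the abstract, so this is a generic stand-in, not the authors' method.

```python
import numpy as np


def build_pessimistic_mdp(transitions, n_states, n_actions, kappa=1.0):
    """Estimate a tabular MDP from offline (s, a, r, s') tuples and
    penalize rewards for poorly covered state-action pairs.

    The 1/sqrt(count) penalty is an assumed, common form of pessimism in
    offline RL; it stands in for the paper's adaptive reward shaping.
    """
    counts = np.zeros((n_states, n_actions))
    reward_sum = np.zeros((n_states, n_actions))
    next_counts = np.zeros((n_states, n_actions, n_states))

    for s, a, r, s_next in transitions:
        counts[s, a] += 1.0
        reward_sum[s, a] += r
        next_counts[s, a, s_next] += 1.0

    visited = counts > 0
    # Maximum-likelihood transition and reward estimates on visited pairs.
    # Unvisited pairs keep zero transition mass and get the maximal penalty.
    P = np.zeros((n_states, n_actions, n_states))
    P[visited] = next_counts[visited] / counts[visited][:, None]
    R = np.zeros((n_states, n_actions))
    R[visited] = reward_sum[visited] / counts[visited]

    # Pessimism: subtract a penalty that shrinks as coverage improves, so a
    # planner is discouraged from exploiting out-of-distribution actions.
    penalty = kappa / np.sqrt(np.maximum(counts, 1.0))
    penalty[~visited] = kappa
    return P, R - penalty


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy offline log: random tuples standing in for data gathered under a
    # cyclic signal policy.
    data = [(rng.integers(4), rng.integers(2), rng.normal(), rng.integers(4))
            for _ in range(200)]
    P, R = build_pessimistic_mdp(data, n_states=4, n_actions=2)
    print(P.shape, R.shape)
```

The estimated pair (P, R) can then be handed to any tabular planner (e.g., value iteration); the penalty biases the resulting policy toward regions of the state-action space that are well covered by the offline dataset.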
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: (Almost) all of the three reviewers' comments have been incorporated; a detailed description is provided in our responses to the reviews. Changes are marked in $\color{blue} \text{blue}$.
Assigned Action Editor: ~Aleksandra_Faust1
Submission Number: 253