Fairness-Aware Model-Based Multi-Agent Reinforcement Learning for Traffic Signal Control


22 Sept 2022, 12:36 (modified: 26 Oct 2022, 14:10) · ICLR 2023 Conference Blind Submission · Readers: Everyone
Keywords: Traffic signal control, reinforcement learning, fairness
TL;DR: A novel Fairness-aware Model-based Multi-agent Reinforcement Learning (FM2Light) method to improve sample efficiency and fairness in multi-intersection traffic signal control.
Abstract: Poorly timed traffic lights exacerbate traffic congestion and greenhouse gas emissions. Traffic signal control with reinforcement learning (RL) algorithms has shown great potential for addressing such issues and improving the efficiency of traffic systems. RL-based solutions can outperform classic rule-based methods, especially in dynamic environments. However, most existing RL-based solutions are model-free and require a large number of interactions with the environment, which can be very costly or even unacceptable in real-world scenarios. Furthermore, the fairness of multi-intersection control has been ignored in most previous works, which may lead to unfair congestion across intersections. In this work, we propose a novel Fairness-aware Model-based Multi-agent Reinforcement Learning (FM2Light) method that improves sample efficiency, thereby addressing data-expensive training, and handles unfair control in multi-intersection scenarios with a better reward design. With rigorous experiments under different real-world scenarios, we demonstrate that our method achieves asymptotic performance comparable to model-free RL methods while attaining much higher sample efficiency and greater fairness.
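The abstract mentions handling unfair multi-intersection control "with a better reward design" but does not specify the reward. As an illustrative sketch only (not the paper's actual FM2Light reward), one common way to make a joint reward fairness-aware is to combine a mean efficiency term with a penalty on the spread of per-intersection outcomes; the function name, queue-length signal, and `fairness_weight` parameter below are all assumptions for illustration:

```python
import statistics

def fairness_aware_reward(queue_lengths, fairness_weight=0.5):
    """Hypothetical fairness-aware joint reward for multi-intersection control.

    Each intersection's local reward is its negative queue length. The joint
    reward is the mean local reward (efficiency) minus a penalty proportional
    to the variance across intersections, which discourages concentrating
    congestion at a few intersections.
    """
    local_rewards = [-q for q in queue_lengths]
    efficiency = sum(local_rewards) / len(local_rewards)  # mean efficiency term
    unfairness = statistics.pvariance(local_rewards)      # spread across agents
    return efficiency - fairness_weight * unfairness

# Two configurations with identical total congestion (9 queued vehicles):
balanced = fairness_aware_reward([3, 3, 3])    # -3.0 (no variance penalty)
unbalanced = fairness_aware_reward([0, 0, 9])  # -12.0 (heavily penalized)
```

Under this toy design, a policy that spreads delay evenly is preferred over one that sacrifices a single intersection, even when total queue length is the same; the actual trade-off in the paper would depend on its specific reward formulation.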
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Reinforcement Learning (eg, decision and control, planning, hierarchical RL, robotics)