Towards Practical Hierarchical Reinforcement Learning for Multi-lane Autonomous Driving

Masoud S. Nosrati, Elmira Amirloo Abolfathi, Mohammed Elmahgiubi, Peyman Yadmellat, Jun Luo, Yunfei Zhang, Hengshuai Yao, Hongbo Zhang, Anas Jamil

Oct 12, 2018 · NIPS 2018 Workshop MLITS Submission
  • Abstract: In this paper, we propose an approach for making hierarchical reinforcement learning practical for autonomous driving on multi-lane highways or urban structured roads. While this approach follows the conventional hierarchy of behavior decision, motion planning, and control, it introduces an intermediate layer of abstraction that discretizes the state-action space for motion planning according to a given behavioral decision. This hierarchical design allows principled, modular extension of motion planning, in contrast to relying on either monolithic behavior cloning or a large set of hand-written rules. We show that this design enables significantly faster learning than a flat design when using both value-based and policy optimization methods (DQN and PPO). We also show that this design allows transfer of the trained models, without any retraining, from a simulated environment with virtually no dynamics to one with significantly more realistic dynamics. Overall, our proposed approach is a promising step toward applying reinforcement learning to complex multi-lane driving in the real world. In addition, we introduce and release an open source simulator for multi-lane driving that follows the OpenAI Gym APIs and is suitable for reinforcement learning research.
  • TL;DR: Hierarchical reinforcement learning for multi-lane cruising
  • Keywords: autonomous driving, self-driving cars, hierarchical reinforcement learning, behavior planner, motion planner
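The two-level decomposition the abstract describes (a behavior layer choosing a discrete maneuver, and a motion layer acting within the subspace that maneuver exposes) can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the state fields, maneuver set, and hand-rolled decision rules are hypothetical stand-ins for the learned DQN/PPO policies in the paper.

```python
# Hypothetical two-level hierarchical driving policy.
# High level: pick a discrete behavior (maneuver).
# Low level: map state to a control within that maneuver's action subspace.

BEHAVIORS = ("keep_lane", "change_left", "change_right")

def behavior_policy(state):
    """Pick a discrete maneuver (placeholder for a learned behavior policy)."""
    if state["lead_gap"] < 15.0 and state["left_lane_free"]:
        return "change_left"
    if state["lead_gap"] < 15.0 and state["right_lane_free"]:
        return "change_right"
    return "keep_lane"

def motion_policy(behavior, state):
    """Return a (steering, acceleration) command restricted to the
    discretized action subspace the chosen behavior exposes."""
    if behavior == "keep_lane":
        # Lane keeping only exercises longitudinal control here.
        accel = 1.0 if state["lead_gap"] > 30.0 else -1.0
        return (0.0, accel)
    # Lane changes only exercise lateral control in this toy subspace.
    steer = -0.1 if behavior == "change_left" else 0.1
    return (steer, 0.0)

def act(state):
    """Full hierarchical step: behavior decision, then motion planning."""
    behavior = behavior_policy(state)
    return behavior, motion_policy(behavior, state)
```

The key design point mirrored here is that each behavior restricts the motion planner to a small, maneuver-specific state-action subspace, which is what makes the lower-level learning problem tractable and the planner modularly extensible.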