Tactical Decision Making for Lane Changing with Deep Reinforcement Learning
Mustafa Mukadam, Akansel Cosgun, Alireza Nakhaei, Kikuo Fujimura
Oct 31, 2017 (modified: Oct 31, 2017) · NIPS 2017 Workshop MLITS Submission
Abstract: In this paper we consider the problem of autonomous lane changing for self-driving cars in a multi-lane, multi-agent setting. We present a framework that offers a more structured and data-efficient alternative to end-to-end complete policy learning on problems where the high-level policy is hard to formulate using traditional optimization or rule-based methods but well-designed low-level controllers are available. The framework uses deep reinforcement learning solely to obtain a high-level policy for tactical decision making, while still maintaining a tight integration with the low-level controller, thus getting the best of both worlds. This is made possible by Q-masking, a technique with which we are able to incorporate prior knowledge, constraints, and information from a low-level controller directly into the learning process, thereby simplifying the reward function and making learning faster and more efficient. We provide preliminary results in a simulator and show that our approach is more efficient than a greedy baseline, and more successful and safer than human driving.
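The core Q-masking idea described in the abstract can be sketched roughly as follows: before the greedy action is selected, Q-values of actions ruled out by prior knowledge or the low-level controller are masked so they can never be chosen. The action set, mask source, and function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def masked_argmax(q_values, valid_mask):
    """Greedy action selection with Q-masking: invalid actions
    (e.g. lane changes the low-level controller deems unsafe)
    are assigned -inf so they are never selected."""
    masked_q = np.where(valid_mask, q_values, -np.inf)
    return int(np.argmax(masked_q))

# Hypothetical example: 5 tactical actions, where actions 0 and 4
# (say, left/right lane changes) are currently disallowed by the mask.
q = np.array([2.0, 1.5, 0.3, -0.2, 3.0])
mask = np.array([False, True, True, True, False])
print(masked_argmax(q, mask))  # -> 1
```

Because the masked actions are never taken during training, the agent need not learn to avoid them through negative rewards, which is what simplifies the reward function.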
TL;DR: A framework that provides a policy for autonomous lane changing by learning to make high-level tactical decisions with deep reinforcement learning, while maintaining a tight integration with a low-level controller that takes low-level actions.
Keywords: autonomous lane changing, decision making, deep reinforcement learning, q-learning