Abstract: In this paper, we consider the problem of autonomous lane changing for self-driving
vehicles in a multi-lane, multi-agent setting. We use reinforcement learning solely
to obtain a high-level policy for decision making, while the low-level action is
executed by a pre-defined controller. To obtain a comprehensive model that adapts
to as wide a range of traffic scenarios as possible, training is carried out on more
than 700 handcrafted traffic scenarios involving various types of traffic. A
new asynchronous DQN architecture is proposed to handle the diversity of the training
samples while improving training efficiency. Moreover, we also present a
new state representation that combines both short-range and long-range
information, retaining the merit of each individual representation while improving on
them in terms of generalization ability and training efficiency. The generated policy
is evaluated on 200 additional test scenarios in a simulator; the results demonstrate
that our approach generalizes better than a rule-based baseline
and exhibits greater intelligence and flexibility.
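To make the hierarchical split concrete, the following is a minimal sketch of the division of labor the abstract describes: a learned high-level policy selects a discrete lane-change decision, and a pre-defined low-level controller turns that decision into a steering command. The three-action space, the function names, the lane width, and the proportional controller are all illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of the high-level/low-level split: a learned policy picks a discrete
# maneuver; a fixed controller executes it. All specifics are assumptions.

LANE_WIDTH = 3.7  # metres; a common highway lane width (assumption)

ACTIONS = ("keep_lane", "change_left", "change_right")

def high_level_policy(q_values):
    """Greedy action selection over Q-values (stands in for a trained DQN)."""
    return max(ACTIONS, key=lambda a: q_values[a])

def low_level_controller(action, lane_index, lateral_offset, k_p=0.5):
    """Pre-defined controller: track the centre of the target lane.

    Returns (target_lane, steering_command). The steering command is a
    simple proportional term on the lateral error, purely for illustration.
    """
    delta = {"keep_lane": 0, "change_left": -1, "change_right": 1}[action]
    target_lane = lane_index + delta
    lateral_error = delta * LANE_WIDTH - lateral_offset
    return target_lane, k_p * lateral_error

# Usage: the policy decides, the controller executes.
q = {"keep_lane": 0.2, "change_left": 0.9, "change_right": -0.1}
action = high_level_policy(q)
target_lane, steer = low_level_controller(action, lane_index=1, lateral_offset=0.0)
```

Keeping the continuous control out of the learned policy, as the abstract proposes, shrinks the action space the DQN must explore to a handful of discrete decisions.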