DistillDrive: End-to-End Multi-Mode Autonomous Driving Distillation by Isomorphic Hetero-Source Planning Model
Abstract: End-to-end autonomous driving has recently seen rapid development, exerting a profound influence on both industry and academia. However, existing work places excessive focus on ego-vehicle status as its sole learning objective and lacks planning-oriented understanding, which limits the robustness of the overall decision-making process. In this work, we introduce DistillDrive, an end-to-end knowledge distillation-based autonomous driving
model that leverages diversified instance imitation to enhance multi-mode motion feature learning. Specifically, we
employ a planning model based on structured scene representations as the teacher model, leveraging its diversified planning instances as multi-objective learning targets
for the end-to-end model. Moreover, we incorporate reinforcement learning to enhance the optimization of state-to-decision mappings, while utilizing generative modeling
to construct planning-oriented instances, fostering intricate interactions within the latent space. We validate our
model on the nuScenes and NAVSIM datasets, achieving
a 50% reduction in collision rate and a 3-point improvement in closed-loop performance compared to the baseline model. Code and model are publicly available at
https://github.com/YuruiAI/DistillDrive
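As a rough illustration of the multi-mode distillation idea described in the abstract, the minimal PyTorch sketch below matches a student's candidate motion modes against a teacher's planning instances with a temperature-softened KL objective. The function and tensor names here are hypothetical placeholders; the actual DistillDrive losses and matching scheme are defined in the paper and the repository linked above.

```python
import torch
import torch.nn.functional as F


def multi_mode_distillation_loss(student_modes: torch.Tensor,
                                 teacher_modes: torch.Tensor,
                                 temperature: float = 2.0) -> torch.Tensor:
    """Hypothetical sketch: soft (KL) distillation between per-mode features.

    student_modes, teacher_modes: (batch, num_modes, feat_dim), assumed to be
    already matched mode-for-mode (e.g., student mode i imitates teacher
    planning instance i).
    """
    # Temperature-softened distributions over the feature dimension.
    student_log_prob = F.log_softmax(student_modes / temperature, dim=-1)
    teacher_prob = F.softmax(teacher_modes / temperature, dim=-1)
    # Standard KD scaling by T^2 to keep gradient magnitudes comparable.
    return F.kl_div(student_log_prob, teacher_prob,
                    reduction="batchmean") * temperature ** 2


# Toy usage with random features (illustrative shapes only).
student = torch.randn(4, 6, 128)  # 6 candidate modes from the end-to-end model
teacher = torch.randn(4, 6, 128)  # matched instances from the structured-scene planner
loss = multi_mode_distillation_loss(student, teacher)
```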