Distilling Multi-modal Large Language Models for Autonomous Driving

Published: 16 Oct 2025 · Last Modified: 10 Nov 2025 · NeurIPS 2025 ER Workshop · CC BY 4.0
Keywords: autonomous driving, large language model, multi-modal learning
TL;DR: We propose DiMA, an end-to-end planning framework that distills knowledge from a multi-modal large language model into an end-to-end autonomous driving system for improved efficiency and robustness to long-tail scenarios.
Abstract: Autonomous driving demands safe motion planning, especially in critical “long-tail” scenarios. Recent end-to-end autonomous driving systems leverage large language models (LLMs) as planners to improve generalizability to rare events. However, using LLMs at test time introduces high computational costs. To address this, we propose DiMA, an end-to-end autonomous driving system that maintains the efficiency of an LLM-free (i.e., vision-based) planner while leveraging the world knowledge of an LLM. DiMA distills knowledge from a multi-modal LLM into a vision-based end-to-end planner through a set of specially designed surrogate tasks. Under a joint training strategy, a scene encoder shared by both networks produces structured representations that are both semantically grounded and aligned with the final planning objective. Notably, the LLM is optional at inference, enabling robust planning without compromising efficiency. Training with DiMA results in a 44% trajectory error reduction in long-tail scenarios. DiMA also achieves state-of-the-art performance on the nuScenes planning benchmark.
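To make the training setup in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of one joint training step: a shared scene encoder feeds an LLM-free planning head, while a multi-modal-LLM branch supplies targets for a surrogate feature-alignment loss. This is not the authors' code; the class names (SceneEncoder, VisionPlanner, MLLMTeacher), tensor shapes, loss weights, and the frozen-teacher simplification are all illustrative assumptions.

```python
# Hypothetical sketch of the distillation idea described in the abstract.
# NOT the authors' released code; all names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SceneEncoder(nn.Module):
    """Scene encoder shared by the vision planner and the MLLM branch."""
    def __init__(self, in_dim=256, d_model=256, n_tokens=64):
        super().__init__()
        self.proj = nn.Linear(in_dim, d_model)
        self.tokens = nn.Parameter(torch.randn(n_tokens, d_model))

    def forward(self, feats):                          # feats: (B, N, in_dim)
        x = self.proj(feats)                           # (B, N, d_model)
        # cross-attend learned scene tokens to image features (simplified)
        attn = torch.softmax(self.tokens @ x.transpose(1, 2), dim=-1)
        return attn @ x                                # (B, n_tokens, d_model)


class VisionPlanner(nn.Module):
    """LLM-free planning head: scene tokens -> future trajectory waypoints."""
    def __init__(self, d_model=256, horizon=6):
        super().__init__()
        self.horizon = horizon
        self.head = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                  nn.Linear(d_model, horizon * 2))

    def forward(self, scene_tokens):
        pooled = scene_tokens.mean(dim=1)                    # (B, d_model)
        return self.head(pooled).view(-1, self.horizon, 2)   # (B, horizon, 2)


class MLLMTeacher(nn.Module):
    """Placeholder for the multi-modal LLM branch (training-time only)."""
    def __init__(self, in_dim=256, d_model=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, d_model), nn.GELU(),
                                     nn.Linear(d_model, d_model))

    def forward(self, feats):
        return self.encoder(feats).mean(dim=1)               # (B, d_model)


def training_step(feats, gt_traj, encoder, planner, teacher, opt, alpha=0.5):
    """One joint step: planning loss + a surrogate feature-alignment loss."""
    scene_tokens = encoder(feats)
    plan_loss = F.l1_loss(planner(scene_tokens), gt_traj)
    with torch.no_grad():                                    # frozen teacher
        teacher_emb = teacher(feats)
    distill_loss = F.mse_loss(scene_tokens.mean(dim=1), teacher_emb)
    loss = plan_loss + alpha * distill_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()


if __name__ == "__main__":
    enc, plan, teach = SceneEncoder(), VisionPlanner(), MLLMTeacher()
    opt = torch.optim.AdamW(list(enc.parameters()) + list(plan.parameters()),
                            lr=1e-4)
    feats, gt = torch.randn(2, 100, 256), torch.randn(2, 6, 2)  # dummy batch
    print(training_step(feats, gt, enc, plan, teach, opt))
```

In this sketch, only SceneEncoder and VisionPlanner would be used at inference, mirroring the abstract's point that the LLM branch is optional at test time.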
Submission Number: 54