OpenEMMA: Open-Source Multimodal Model for End-to-End Autonomous Driving

Published: 01 Jan 2025 · Last Modified: 15 Oct 2025 · WACV (Workshops) 2025 · CC BY-SA 4.0
Abstract: Since the advent of Multimodal Large Language Models (MLLMs), they have made a significant impact across a wide range of real-world applications, particularly in Autonomous Driving (AD). Their ability to process complex visual data and reason about intricate driving scenarios has paved the way for a new paradigm in end-to-end AD systems. However, progress on end-to-end models for AD has been slow, as existing fine-tuning methods demand substantial resources, including extensive computational power, large-scale datasets, and significant funding. Drawing inspiration from recent advancements in inference computing, we propose OpenEMMA, an open-source end-to-end framework based on MLLMs. By incorporating a Chain-of-Thought reasoning process, OpenEMMA achieves significant improvements over the baseline when leveraging a diverse range of MLLMs. Furthermore, OpenEMMA demonstrates effectiveness, generalizability, and robustness across a variety of challenging driving scenarios, offering a more efficient and effective approach to autonomous driving. We release all the code at https://github.com/taco-group/OpenEMMA.
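To make the idea concrete, the sketch below shows one way Chain-of-Thought prompting of an off-the-shelf MLLM can be wired into an end-to-end planning step: the model is asked to reason about the scene before emitting future ego waypoints. This is a minimal illustration, not the official OpenEMMA pipeline; the backend (the OpenAI Python client with a `gpt-4o` model), the prompt wording, the `predict_waypoints` helper, and the waypoint output format are all assumptions made for the example.

```python
# Minimal sketch (NOT the official OpenEMMA implementation): Chain-of-Thought
# prompting of an MLLM to turn one front-camera frame into future ego waypoints.
# The OpenAI client backend, prompt text, and "WAYPOINT: x, y" format are assumed.
import base64
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

COT_PROMPT = (
    "You are a driving agent. Reason step by step:\n"
    "1. Describe the scene and the critical road users.\n"
    "2. State the ego vehicle's likely intent (e.g., keep lane, turn left).\n"
    "3. Output 10 future waypoints as (x, y) in meters in the ego frame,\n"
    "   one per line, formatted exactly as: WAYPOINT: x, y"
)

def predict_waypoints(image_path: str, model: str = "gpt-4o"):
    """Query the MLLM with one camera frame and parse the waypoints it returns."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()

    resp = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": COT_PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    text = resp.choices[0].message.content
    # Keep only the final "WAYPOINT: x, y" lines emitted after the reasoning steps.
    return [
        (float(m.group(1)), float(m.group(2)))
        for m in re.finditer(r"WAYPOINT:\s*(-?\d+\.?\d*)\s*,\s*(-?\d+\.?\d*)", text)
    ]
```

The intermediate reasoning (scene description, intent) is requested purely to condition the final waypoint prediction; only the parsed waypoints would be passed to downstream planning or evaluation.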