ESDMotion: End-to-end Motion Prediction Only with SD Maps

23 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: motion prediction
TL;DR: We propose a motion prediction method that requires only SD maps, achieving on-par performance with HD maps through end-to-end learning.
Abstract: Motion prediction is a crucial task in autonomous driving. Existing motion prediction models rely on high-definition (HD) maps to provide environmental context for agents. However, offline HD maps require extensive manual annotation, making them costly and unscalable. Online mapping-based methods still require HD map annotations to train the online mapping module, which is likewise costly and may also suffer from out-of-distribution map elements. In this work, we explore motion prediction with standard-definition (SD) maps as a substitute, as they are more readily available and offer broader coverage. One crucial challenge is that SD maps have low resolution and poor alignment accuracy: directly replacing HD maps with SD maps leads to a significant drop in performance. We introduce end-to-end learning and modules specially tailored to SD maps to address these problems. Specifically, we propose ESDMotion, the first end-to-end motion prediction framework that uses SD maps without any HD map supervision. We integrate BEV features obtained from raw sensor data into existing motion prediction models, with tailored designs for anchor-based and anchor-free models respectively. We find that coarse and misaligned SD maps pose challenges to the feature fusion of anchor-free models and the anchor generation of anchor-based models. We therefore design two novel modules, Enhanced Road Observation and Pseudo Lane Expansion, to address these issues. Benefiting from the end-to-end structure and the new modules, ESDMotion outperforms state-of-the-art online mapping-based motion prediction methods by 13.4% in prediction performance and narrows the performance gap between HD and SD maps by 73%. We will open-source our code and checkpoints.
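To make the fusion idea in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' released code) of the kind of BEV/SD-map fusion described: SD-map polyline tokens attend to sensor-derived BEV features so a downstream predictor can compensate for the map's low resolution and misalignment. All module names, shapes, and hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SDMapBEVFusion(nn.Module):
    """Hypothetical sketch: fuse BEV features from raw sensors with coarse
    SD-map polyline features via cross-attention, yielding map context
    tokens for a downstream motion prediction head."""

    def __init__(self, d_model: int = 128, num_heads: int = 4):
        super().__init__()
        # Encode SD-map polyline points (x, y) into token embeddings.
        self.sd_encoder = nn.Sequential(
            nn.Linear(2, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        # SD-map tokens query BEV cells to correct for the low resolution
        # and misalignment of the SD map.
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, bev: torch.Tensor, sd_polylines: torch.Tensor) -> torch.Tensor:
        # bev: (B, C, H, W) features from a sensor backbone
        # sd_polylines: (B, N, P, 2) -- N polylines with P points each
        bev_tokens = bev.flatten(2).transpose(1, 2)            # (B, H*W, C)
        sd_tokens = self.sd_encoder(sd_polylines).mean(dim=2)  # (B, N, C), pooled per polyline
        fused, _ = self.cross_attn(sd_tokens, bev_tokens, bev_tokens)
        return self.norm(sd_tokens + fused)                    # (B, N, C) map context

# Usage: feed the fused map tokens to any anchor-based or anchor-free decoder.
fusion = SDMapBEVFusion()
bev = torch.randn(2, 128, 50, 50)    # batch of BEV feature maps
sd = torch.randn(2, 16, 10, 2)       # 16 polylines, 10 points each
map_context = fusion(bev, sd)        # torch.Size([2, 16, 128])
```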
Primary Area: applications to robotics, autonomy, planning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2920