JointMotion: Joint Self-Supervision for Joint Motion Prediction

Published: 05 Sept 2024, Last Modified: 08 Nov 2024
Venue: CoRL 2024
License: CC BY 4.0
Keywords: Self-supervised learning, representation learning, multimodal pre-training, motion prediction, data-efficient learning
TL;DR: Self-supervised pre-training method for joint motion prediction in self-driving vehicles.
Abstract: We present JointMotion, a self-supervised pre-training method for joint motion prediction in self-driving vehicles. Our method jointly optimizes a scene-level objective connecting motion and environments, and an instance-level objective to refine learned representations. Scene-level representations are learned via non-contrastive similarity learning of past motion sequences and environment context. At the instance level, we use masked autoencoding to refine multimodal polyline representations. We complement this with an adaptive pre-training decoder that enables JointMotion to generalize across different environment representations, fusion mechanisms, and dataset characteristics. Notably, our method reduces the joint final displacement error of Wayformer, HPTR, and Scene Transformer models by 3%, 8%, and 12%, respectively; and enables transfer learning between the Waymo Open Motion and the Argoverse 2 Motion Forecasting datasets.
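To make the two objectives in the abstract concrete, here is a minimal, self-contained sketch. It is an illustration under assumptions, not the authors' implementation (see the repository linked below for that): the scene-level term is shown as a VICReg-style non-contrastive similarity loss between pooled motion and environment embeddings, and the instance-level term as masked autoencoding over polyline tokens. All module names, tensor shapes, masking ratio, and loss weights are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of the two pre-training objectives described in the
# abstract. Shapes, module names, and coefficients are illustrative
# assumptions, not the paper's actual architecture.

def non_contrastive_scene_loss(z_motion, z_env, eps=1e-4):
    """VICReg-style non-contrastive loss between paired scene embeddings
    z_motion, z_env of shape (batch, dim). Coefficients follow the VICReg
    paper; the actual JointMotion objective may differ."""
    # Invariance: pull paired motion/environment embeddings together.
    inv = F.mse_loss(z_motion, z_env)
    # Variance: keep each dimension's std above 1 to avoid collapse.
    std_m = torch.sqrt(z_motion.var(dim=0) + eps)
    std_e = torch.sqrt(z_env.var(dim=0) + eps)
    var = F.relu(1.0 - std_m).mean() + F.relu(1.0 - std_e).mean()
    # Covariance: decorrelate embedding dimensions.
    def cov_term(z):
        z = z - z.mean(dim=0)
        cov = (z.T @ z) / (z.shape[0] - 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        return (off_diag ** 2).sum() / z.shape[1]
    return 25.0 * inv + 25.0 * var + 1.0 * (cov_term(z_motion) + cov_term(z_env))

class MaskedPolylineAutoencoder(nn.Module):
    """Instance-level objective: mask polyline tokens, reconstruct them."""
    def __init__(self, dim=128, n_points=10, point_dim=2):
        super().__init__()
        self.embed = nn.Linear(n_points * point_dim, dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.decoder = nn.Linear(dim, n_points * point_dim)

    def forward(self, polylines, mask_ratio=0.5):
        # polylines: (batch, n_polylines, n_points * point_dim)
        tokens = self.embed(polylines)
        mask = torch.rand(tokens.shape[:2], device=tokens.device) < mask_ratio
        tokens = torch.where(
            mask.unsqueeze(-1), self.mask_token.expand_as(tokens), tokens
        )
        recon = self.decoder(self.encoder(tokens))
        # Reconstruction loss only on the masked tokens.
        return F.mse_loss(recon[mask], polylines[mask])

# Toy usage: pooled scene embeddings plus a batch of map polylines.
motion_z, env_z = torch.randn(32, 128), torch.randn(32, 128)
scene_loss = non_contrastive_scene_loss(motion_z, env_z)
mae = MaskedPolylineAutoencoder()
polylines = torch.randn(32, 64, 20)  # 64 polylines of 10 2-D points each
instance_loss = mae(polylines)
total_loss = scene_loss + instance_loss  # illustrative joint objective
```

The joint sum in the last line is only meant to show how a scene-level and an instance-level term can be optimized together; how the paper weights or schedules the two objectives is not specified here.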
Code: https://github.com/kit-mrt/future-motion
Student Paper: yes
Submission Number: 284