In-Depth Comparison of Regularization Methods For Long-Tailed Learning in Trajectory Prediction

23 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Supplementary Material: zip
Primary Area: applications to robotics, autonomy, planning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Trajectory Prediction, Long-Tailed Learning, Imbalanced Regression, Autonomous Vehicles
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We compare regularization-based long-tailed learning techniques for trajectory prediction, and provide in-depth analysis.
Abstract: Autonomous robots carry significant potential for risk because they operate in open-ended environments where humans interact in complex, diverse ways. To operate safely, such systems must predict this behaviour, especially when it falls in the unexpected and potentially dangerous long tail of the data distribution. Previous work on long-tailed trajectory prediction uses models that do not predict a distribution of trajectories with an associated likelihood for each prediction. Furthermore, it reports metrics that are biased by the ground truth. We therefore examine regularization methods for long-tailed trajectory prediction by comparing them on the KDE metric, which is designed to compare distributions of trajectories. Moreover, we are the first to report the performance of these methods on both the pedestrian and vehicle classes of the NuScenes dataset.
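The abstract evaluates methods on a KDE metric over trajectory distributions. As a minimal sketch, assuming this refers to a kernel-density-estimate negative log-likelihood (fit a KDE over the model's sampled future trajectories and score the ground truth under it), the snippet below illustrates the idea; the function name, shapes, and per-timestep averaging are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of a KDE-based negative log-likelihood metric for
# trajectory prediction. Shapes and naming are assumptions for illustration.
import numpy as np
from scipy.stats import gaussian_kde


def kde_nll(pred_samples: np.ndarray, gt_traj: np.ndarray) -> float:
    """
    pred_samples: (num_samples, horizon, 2) sampled (x, y) futures from the model
    gt_traj:      (horizon, 2) ground-truth future positions
    Returns the mean negative log-likelihood of the ground truth over the horizon.
    """
    _, horizon, _ = pred_samples.shape
    nll_per_step = []
    for t in range(horizon):
        # Fit a 2D Gaussian KDE over the predicted positions at timestep t;
        # gaussian_kde expects data of shape (dims, num_points).
        kde = gaussian_kde(pred_samples[:, t, :].T)
        # Log-likelihood of the ground-truth position under the fitted KDE.
        log_p = kde.logpdf(gt_traj[t, :])[0]
        nll_per_step.append(-log_p)
    return float(np.mean(nll_per_step))


# Usage with dummy data: 200 sampled futures over a 12-step horizon.
rng = np.random.default_rng(0)
samples = rng.normal(size=(200, 12, 2))
gt = rng.normal(size=(12, 2))
print(kde_nll(samples, gt))
```

Because this metric scores the whole predicted distribution rather than a single closest sample, it rewards models that assign high likelihood to the ground truth without being dominated by it, which matches the abstract's motivation for preferring it over ground-truth-biased metrics.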
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7963