Not All Errors Are Made Equal: A Regret Metric for Detecting System-level Trajectory Prediction Failures

Published: 05 Sept 2024 · Last Modified: 21 Oct 2024 · CoRL 2024 · CC BY 4.0
Keywords: Human-Robot Interaction, Trajectory Prediction, Failure Detection
TL;DR: We formalize system-level trajectory prediction failures via regret, and automatically mine for closed-loop human-robot interactions that state-of-the-art generative human predictors and robot planners struggle with.
Abstract: Robot decision-making increasingly relies on data-driven human prediction models when operating around people. While these models are known to mispredict in out-of-distribution interactions, only a subset of prediction errors impacts downstream robot performance. We propose characterizing such "system-level" prediction failures via the mathematical notion of regret: high-regret interactions are precisely those in which mispredictions degraded closed-loop robot performance. We further introduce a probabilistic generalization of regret that calibrates failure detection across disparate deployment contexts and renders regret compatible with both reward-based and reward-free (e.g., generative) planners. In simulated autonomous driving interactions, we show that our system-level failure metric can automatically mine for closed-loop human-robot interactions that state-of-the-art generative human predictors and robot planners struggle with. We further find that the very presence of high-regret data during human predictor fine-tuning is highly predictive of robot re-deployment performance improvements. Moreover, fine-tuning on the informative but significantly smaller high-regret subset (23% of deployment data) is competitive with fine-tuning on the full deployment dataset, indicating a promising avenue for efficiently mitigating system-level human-robot interaction failures.
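To make the regret notion concrete, below is a minimal, hypothetical sketch (not the paper's implementation or API): `plan`, `reward`, and the trajectory arguments are illustrative stand-ins for a reward-based robot planner and a closed-loop reward evaluator. Regret compares the plan the robot actually executed (computed against the *predicted* human trajectory) to the hindsight-optimal plan (computed against the human trajectory that actually occurred), with both scored against the realized human behavior.

```python
def regret(plan, reward, x0, human_pred, human_true):
    """Regret induced by a human prediction error (illustrative sketch).

    plan:       maps (initial state, human trajectory) -> robot trajectory
    reward:     closed-loop reward of a robot trajectory given the human's
                realized trajectory
    x0:         initial joint robot-human state
    human_pred: human trajectory the robot planned against at deployment
    human_true: human trajectory actually realized at deployment
    """
    # Plan the robot executed, computed against the (possibly wrong) prediction.
    executed_plan = plan(x0, human_pred)
    # Hindsight-optimal plan, computed with perfect knowledge of the human.
    oracle_plan = plan(x0, human_true)
    # Both plans are scored against the human's actual behavior. High regret
    # flags interactions where the misprediction degraded closed-loop
    # robot performance -- i.e., a system-level prediction failure.
    return reward(oracle_plan, human_true) - reward(executed_plan, human_true)
```

Under this view, a large prediction error with near-zero regret is benign (the robot's behavior would not have changed), while even a small error that flips the robot's plan can incur high regret; the paper's probabilistic generalization further calibrates this quantity across deployment contexts and reward-free planners.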
Supplementary Material: zip
Spotlight Video: mp4
Website: https://cmu-intentlab.github.io/not-all-errors/
Publication Agreement: pdf
Student Paper: yes
Submission Number: 476