Evaluating Uncertainty-based Failure Detection for Closed-Loop LLM Planners

Published: 09 Apr 2024, Last Modified: 26 Apr 2024
Venue: ICRA 2024: Back to the Future
License: CC BY 4.0
Keywords: LLM planners; uncertainty estimation; closed-loop planning
TL;DR: We evaluated three ways of quantifying the uncertainty of an MLLM failure detector for closed-loop LLM planners.
Abstract: Recently, Large Language Models (LLMs) have shown remarkable performance as zero-shot task planners for robotic manipulation tasks. However, the open-loop nature of previous work makes LLM-based planning error-prone and fragile. On the other hand, failure detection approaches for closed-loop planning are often limited by task-specific heuristics or rest on the unrealistic assumption that the detector's predictions are always trustworthy. In this work, we attempt to mitigate these issues by introducing KnowLoop, a framework for closed-loop LLM-based planning backed by an uncertainty-based Multimodal Large Language Model (MLLM) failure detector. Specifically, we evaluate three ways of quantifying the uncertainty of MLLMs, namely token probability, entropy, and self-explained confidence, as primary metrics based on three carefully designed, representative prompting strategies. With a self-collected dataset covering various manipulation tasks and an LLM-based robot system, our experiments demonstrate that token probability and entropy are more reflective of failure than self-explained confidence. By setting an appropriate threshold to filter out uncertain predictions and actively seeking human help, the accuracy of failure detection can be significantly enhanced. This improvement boosts the effectiveness of closed-loop planning and the overall task success rate.
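As a concrete illustration, below is a minimal sketch (not the authors' code) of how the three uncertainty metrics and the threshold-based filtering described in the abstract could be computed, assuming an MLLM API that exposes per-token log-probabilities for a yes/no failure query. The function names, the log-probability input format, and the threshold value are all hypothetical.

```python
import math
import re

def token_probability(answer_logprobs):
    """Average probability of the generated answer tokens.

    `answer_logprobs` is a list of per-token log-probabilities for the
    MLLM's answer, e.g. from an API's logprobs output (assumed format).
    """
    probs = [math.exp(lp) for lp in answer_logprobs]
    return sum(probs) / len(probs)

def entropy(candidate_logprobs):
    """Shannon entropy over competing answers at the decision token.

    `candidate_logprobs` holds log-probabilities of the candidate answers
    (e.g. "yes" vs. "no"); they are renormalized before computing entropy.
    """
    probs = [math.exp(lp) for lp in candidate_logprobs]
    total = sum(probs)
    probs = [p / total for p in probs]
    return -sum(p * math.log(p) for p in probs if p > 0)

def self_explained_confidence(response_text):
    """Parse a confidence value the model states itself, e.g. "Confidence: 0.7"."""
    m = re.search(r"confidence[:\s]+([01](?:\.\d+)?)", response_text, re.IGNORECASE)
    return float(m.group(1)) if m else None

def should_ask_human(uncertainty, threshold):
    """Filter out uncertain predictions: above the threshold, defer to a human."""
    return uncertainty > threshold

# Example with made-up log-probabilities for the decision token "no" vs. "yes":
u = entropy([-0.2, -1.7])          # ~0.48 nats
print(should_ask_human(u, 0.5))    # False: prediction is kept, loop continues
```

In this sketch, high entropy (or low token probability) marks a prediction as untrustworthy, so the planner would pause and request human feedback instead of acting on the failure detector's output; the 0.5 threshold is an arbitrary placeholder, not a value from the paper.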
Submission Number: 10