Know Where You’re Uncertain When Planning with Multimodal Foundation Models: A Formal Framework

Published: 11 Feb 2025, Last Modified: 13 May 2025
Venue: MLSys 2025
License: CC BY 4.0
Keywords: Fine-tuning, Uncertainty Estimation, Foundation Models, Large Language Models, Planning, Conformal Prediction, Formal Methods, Verification, Autonomous Systems, Active Sensing
TL;DR: A formal framework for uncertainty disentanglement, quantification, and targeted mitigation, advancing planning with multimodal foundation models.
Abstract: Multimodal foundation models offer a promising framework for robotic perception and planning by processing sensory inputs to generate actionable plans. However, addressing uncertainty in both perception (sensory interpretation) and decision-making (plan generation) remains a critical challenge for ensuring task reliability. This paper presents a comprehensive framework to disentangle, quantify, and mitigate these two forms of uncertainty. We first introduce a framework for uncertainty $\textit{disentanglement}$, isolating $\textit{perception uncertainty}$ arising from limitations in visual understanding and $\textit{decision uncertainty}$ relating to the robustness of generated plans. To quantify each type of uncertainty, we propose methods tailored to the unique properties of perception and decision-making: we use conformal prediction to calibrate perception uncertainty and introduce Formal-Methods-Driven Prediction (FMDP) to quantify decision uncertainty, leveraging formal verification techniques for theoretical guarantees. Building on this quantification, we implement two targeted $\textit{intervention}$ mechanisms: an active sensing process that dynamically re-observes high-uncertainty scenes to enhance visual input quality, and an automated refinement procedure that fine-tunes the model on high-certainty data, improving its capability to meet task specifications. Empirical validation in real-world and simulated robotic tasks demonstrates that our uncertainty disentanglement framework reduces variability by up to 40\% and enhances task success rates by 5\% compared to baselines. These improvements are attributed to the combined effect of both interventions and highlight the importance of uncertainty disentanglement, which facilitates targeted interventions that enhance the robustness and reliability of autonomous systems. Webpage, videos, demo, and code: https://uncertainty-in-planning.github.io.
Supplementary Material: pdf
Submission Number: 105
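
Illustrative sketch (not from the paper): the abstract states that perception uncertainty is calibrated with conformal prediction. The minimal split-conformal-prediction example below, written in Python with hypothetical data, names, and score function, shows the basic mechanics that such a calibration step typically relies on: a nonconformity score is computed on a held-out calibration set, its finite-sample quantile becomes a threshold, and test-time prediction sets grow when the perception model is less certain.

    # Minimal sketch of split conformal prediction for perception calibration.
    # All data, names, and the score function here are hypothetical; the paper's
    # actual calibration procedure may differ.
    import numpy as np

    def conformal_quantile(cal_scores, alpha):
        # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
        n = len(cal_scores)
        q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
        return np.quantile(cal_scores, q_level, method="higher")

    def calibrate(cal_probs, cal_labels, alpha=0.1):
        # Nonconformity score: 1 minus the model's probability of the true label.
        scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
        return conformal_quantile(scores, alpha)

    def prediction_set(test_probs, qhat):
        # Keep every label whose score stays below the calibrated threshold.
        # Larger sets signal higher perception uncertainty.
        return np.where(1.0 - test_probs <= qhat)[0]

    # Hypothetical usage with random softmax outputs over 5 object labels.
    rng = np.random.default_rng(0)
    cal_probs = rng.dirichlet(np.ones(5), size=200)
    cal_labels = rng.integers(0, 5, size=200)
    qhat = calibrate(cal_probs, cal_labels, alpha=0.1)
    print(prediction_set(rng.dirichlet(np.ones(5)), qhat))

In the framework described in the abstract, an unusually large prediction set of this kind is the sort of high-uncertainty signal that could trigger the active-sensing intervention, i.e., re-observing the scene before committing to a plan.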