Keywords: Video Planning, State Estimation, Diffusion Model
TL;DR: A video-based planning framework that adapts online by updating model parameters from interaction feedback and rejecting previously failed plans during generation, enabling implicit state estimation and more reliable replanning in uncertain manipulation tasks.
Abstract: Video-based representations have gained prominence in planning and decision-making due to their ability to encode rich spatiotemporal dynamics and geometric relationships. These representations enable flexible and generalizable solutions for complex tasks such as object manipulation and navigation. However, existing video planning frameworks often struggle to adapt to failures at interaction time due to their inability to reason about uncertainties in partially observed environments. To overcome these limitations, we introduce a novel framework that integrates interaction-time data into the planning process. Our approach updates model parameters online and filters out previously failed plans during generation. This enables implicit state estimation, allowing the system to adapt dynamically without explicitly modeling unknown state variables. We evaluate our framework through extensive experiments on a new simulated manipulation benchmark, demonstrating its ability to improve replanning performance and advance the field of video-based decision-making.
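As a rough illustration of the mechanism the abstract describes (online parameter updates from interaction feedback, plus rejection of previously failed plans during generation), here is a minimal Python sketch. All names below (sample_plan, similarity, fine_tune_step, execute) are hypothetical stand-ins, not the paper's actual interface; the real method operates on video plans from a diffusion model, which this toy replaces with noisy vectors.

```python
# Minimal sketch of the interaction-time adaptation loop described in the
# abstract. Every function here is a hypothetical placeholder: sample_plan
# stands in for sampling from a video diffusion planner, fine_tune_step for
# an online parameter update, and execute for rolling out a plan.

import numpy as np

rng = np.random.default_rng(0)

def sample_plan(params, horizon=8, dim=4):
    """Stand-in for sampling a candidate video plan from a diffusion model."""
    return params + 0.1 * rng.standard_normal((horizon, dim))

def similarity(plan_a, plan_b):
    """Crude plan similarity: negative mean per-step L2 distance."""
    return -float(np.linalg.norm(plan_a - plan_b, axis=1).mean())

def fine_tune_step(params, feedback, lr=0.05):
    """Stand-in for an online gradient update from interaction feedback."""
    return params - lr * feedback

def execute(plan):
    """Stand-in for executing the plan; returns (success, feedback signal)."""
    success = bool(rng.random() > 0.5)
    feedback = plan.mean(axis=0)  # pretend this is an informative error signal
    return success, feedback

params = np.zeros(4)
failed_plans = []

for attempt in range(10):
    # Rejection step: resample until the candidate is dissimilar from every
    # previously failed plan. Failures carry information about the unobserved
    # state, so filtering them acts as implicit state estimation.
    for _ in range(20):
        plan = sample_plan(params)
        if all(similarity(plan, f) < -0.5 for f in failed_plans):
            break

    success, feedback = execute(plan)
    if success:
        print(f"attempt {attempt}: success")
        break
    # Online update: fold interaction-time data back into the planner.
    failed_plans.append(plan)
    params = fine_tune_step(params, feedback)
    print(f"attempt {attempt}: failed, replanning")
```

In this toy version, the two adaptation channels the abstract names are kept separate: rejection sampling filters the plan distribution at generation time, while the parameter update shifts the distribution itself for subsequent attempts.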
Supplementary Material: zip
Primary Area: applications to robotics, autonomy, planning
Submission Number: 7993