Beyond Static Perception: Integrating Temporal Context into VLMs for Cloth Folding

Published: 01 Jan 2025, Last Modified: 16 Oct 2025. CoRR 2025. License: CC BY-SA 4.0.
Abstract: Manipulating clothes is challenging due to their complex dynamics, high deformability, and frequent self-occlusions. Garments exhibit a nearly infinite number of configurations, making explicit state representations difficult to define. In this paper, we analyze BiFold, a model that predicts language-conditioned pick-and-place actions from visual observations, while implicitly encoding garment state through end-to-end learning. To address scenarios such as crumpled garments or recovery from failed manipulations, BiFold leverages temporal context to improve state estimation. We examine the internal representations of the model and present evidence that its fine-tuning and temporal context enable effective alignment between text and image regions, as well as temporal consistency.
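To make the idea of conditioning pick-and-place prediction on both language and a short history of observations concrete, the sketch below shows one plausible arrangement: per-frame visual features, a temporal self-attention module over the history, and a text-conditioned action head. This is a minimal illustrative example, not BiFold's actual architecture; all module names, dimensions, and the fusion scheme are assumptions.

```python
# Illustrative sketch only (not BiFold's code): a policy that fuses a
# history of visual observations with a language embedding to predict
# normalized pick-and-place coordinates.
import torch
import torch.nn as nn


class TemporalPickPlacePolicy(nn.Module):
    def __init__(self, d_model: int = 256):
        super().__init__()
        # Per-frame visual encoder (stand-in for a pretrained VLM vision tower).
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=4), nn.ReLU(),
            nn.Conv2d(32, d_model, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Temporal context: self-attention over the frame history.
        self.temporal = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        # Language conditioning (stand-in for a text encoder's output).
        self.text_proj = nn.Linear(768, d_model)
        # Predict normalized (x, y) for pick and for place.
        self.head = nn.Linear(2 * d_model, 4)

    def forward(self, frames: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, 3, H, W) observation history; text_emb: (B, 768)
        b, t = frames.shape[:2]
        tokens = self.frame_encoder(frames.flatten(0, 1)).view(b, t, -1)
        ctx = self.temporal(tokens)[:, -1]          # summary of the history
        fused = torch.cat([ctx, self.text_proj(text_emb)], dim=-1)
        return torch.sigmoid(self.head(fused))      # (B, 4): pick_xy, place_xy


policy = TemporalPickPlacePolicy()
actions = policy(torch.rand(2, 4, 3, 224, 224), torch.rand(2, 768))
print(actions.shape)  # torch.Size([2, 4])
```

In this toy setup, attending over the last few frames is what lets the policy disambiguate states that look similar in a single image, e.g. a crumpled garment versus one mid-recovery from a failed fold, which is the role the abstract attributes to temporal context.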