Capturing Visual Environment Structure Correlates with Control Performance

Published: 26 Jan 2026 · Last Modified: 11 Apr 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: Robot Learning, Computer Vision, Diffusion Policy
TL;DR: We tackle the problem of visual representation selection for policy learning by proposing a practical and scalable proxy task: decoding the full environment states from visual observations.
Abstract: The choice of visual representation is key to scaling generalist robot policies. However, direct evaluation via policy rollouts is expensive, even in simulation. Existing proxy metrics focus on the representation's capacity to capture narrow aspects of the visual world, like object shape, limiting generalization across environments. In this paper, we take an analytical perspective: we probe pretrained visual encoders by measuring how well they support decoding of environment state—including geometry, object structure, and physical attributes—from images. Leveraging simulation environments with access to ground-truth state, we show that this probing accuracy strongly correlates with downstream policy performance across diverse environments and learning settings, significantly outperforming prior metrics. Our study provides insight into the representational properties that support generalizable manipulation, suggesting that learning to encode full environment state is a promising objective for visual representations for control.
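The probing protocol the abstract describes — fitting a lightweight decoder from frozen encoder features to ground-truth environment state, then correlating probe accuracy with policy rollout success across environments — can be illustrated with a minimal sketch. All names, shapes, and the synthetic data below are hypothetical stand-ins (the paper's actual encoders, state spaces, and probe architecture may differ); the sketch uses a ridge-regression probe and a rank (Spearman) correlation as one plausible instantiation.

```python
# Hypothetical sketch of state-decoding probes as a proxy for policy performance.
# Assumptions (not from the paper): ridge-regression probe, R^2 probe accuracy,
# Spearman correlation, synthetic features/states standing in for real encoders.
import numpy as np

rng = np.random.default_rng(0)

def probe_r2(features, states, reg=1e-3):
    """Fit a ridge-regression probe features -> state; return R^2 on the fit."""
    X = np.hstack([features, np.ones((len(features), 1))])  # add bias column
    W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ states)
    pred = X @ W
    ss_res = ((states - pred) ** 2).sum()
    ss_tot = ((states - states.mean(axis=0)) ** 2).sum()
    return 1.0 - ss_res / ss_tot

def spearman(a, b):
    """Spearman rank correlation (no tie correction; fine for continuous scores)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# Synthetic per-environment data: frozen encoder features and ground-truth
# state vectors, with environment-dependent noise so probe accuracy varies.
n_envs, n_obs, feat_dim, state_dim = 8, 256, 32, 6
probe_scores = []
for _ in range(n_envs):
    feats = rng.normal(size=(n_obs, feat_dim))        # stand-in encoder features
    true_W = rng.normal(size=(feat_dim, state_dim))   # stand-in feature->state map
    noise = rng.uniform(0.5, 10.0)                    # env-specific state noise
    states = feats @ true_W + rng.normal(scale=noise, size=(n_obs, state_dim))
    probe_scores.append(probe_r2(feats, states))
probe_scores = np.array(probe_scores)

# Toy rollout success rates, loosely tied to probe accuracy for illustration.
policy_success = probe_scores + rng.normal(scale=0.05, size=n_envs)

rho = spearman(probe_scores, policy_success)
print(f"Spearman rho between probe accuracy and policy success: {rho:.2f}")
```

In the paper's setting, `feats` would come from a pretrained visual encoder applied to simulator observations and `states` from the simulator's ground-truth state (geometry, object structure, physical attributes); the claim under test is that `rho` is high across diverse environments.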
Supplementary Material: zip
Primary Area: applications to robotics, autonomy, planning
Submission Number: 4025