Keywords: Robotic Manipulation, Real-to-Sim-to-Real, Robotic Data Collection
TL;DR: We present a framework that transforms any single image into a robotic environment and enables scalable robotic data collection.
Abstract: We introduce RoLA, a framework that transforms any in-the-wild image into an interactive, physics-enabled robotic environment. Unlike previous methods, RoLA operates directly on a single image without requiring additional hardware or digital assets. Our framework democratizes robotic data generation by producing massive visuomotor robotic demonstrations within minutes from a wide range of image sources, including camera captures, robotic datasets, and Internet images. At its core, our approach combines a novel method for single-view physical scene recovery with an efficient visual blending strategy for photorealistic data collection. We demonstrate RoLA's versatility across applications such as scalable robotic data generation and augmentation, robot learning from Internet images, and single-image real-to-sim-to-real systems for manipulators and humanoids. Video results are available at our \href{https://rola-2025.github.io/}{project page}.
Supplementary Material: zip
Spotlight: zip
Submission Number: 163