SDS – See it, Do it, Sorted: Quadruped Skill Synthesis from Single Video Demonstration

Published: 08 Aug 2025 · Last Modified: 16 Sept 2025 · CoRL 2025 Poster · CC BY 4.0
Keywords: Skill Imitation, Robot Skill Learning, Quadrupedal Robot
TL;DR: Fully automated pipeline for quadruped locomotion skill learning from a single video via GPT-4o–generated reward functions, requiring no manual tuning.
Abstract: Imagine a robot learning locomotion skills from any single video, without labels or reward engineering. We introduce SDS ("See it. Do it. Sorted."), an automated pipeline for skill acquisition from unstructured video demonstrations. Using GPT-4o, SDS applies two novel prompting techniques: spatio-temporal grid-based visual encoding (Gv) and structured input decomposition (SUS). Together, these produce executable reward functions (RFs) from raw input videos. The RFs are used to train PPO policies and are optimized through closed-loop evolution, using training footage and performance metrics as self-supervised signals. SDS allows quadrupeds (e.g., the Unitree Go1) to learn four gaits (trot, bound, pace, and hop), achieving 100% gait-matching fidelity, Dynamic Time Warping (DTW) distances on the order of 10^-6, and stable locomotion with zero failures, both in simulation and in the real world. SDS generalizes to morphologically different quadrupeds (e.g., ANYmal) and outperforms prior work in data efficiency, training time, and engineering effort. Our code is open-source and available at: https://sdsreview.github.io/SDS_ANONYM/
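As a rough illustration of the closed loop the abstract describes, the sketch below pairs a standard DTW implementation (the gait-fidelity metric reported above) with stubbed stages for reward-function generation and policy training. It is a minimal sketch under stated assumptions, not the authors' released code: `generate_reward_candidates` and `train_ppo_policy` are hypothetical placeholders standing in for the GPT-4o prompting (Gv/SUS) and PPO stages.

```python
# Minimal sketch of an SDS-style closed loop (assumed structure, not the
# authors' API): an LLM proposes reward functions, PPO policies are trained
# on them, and DTW distance to the demonstrated gait signal drives selection.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classical Dynamic Time Warping distance between two 1-D signals."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

def generate_reward_candidates(video_prompt: str, k: int) -> list[str]:
    """Hypothetical stub: query GPT-4o with Gv/SUS prompts for k executable
    reward-function candidates (source code strings)."""
    raise NotImplementedError

def train_ppo_policy(reward_fn_src: str) -> tuple[object, np.ndarray]:
    """Hypothetical stub: train a PPO policy on the generated reward function,
    returning the policy and a gait-phase signal from its training rollout."""
    raise NotImplementedError

def closed_loop_evolution(video_prompt: str, demo_signal: np.ndarray,
                          generations: int = 3, k: int = 4):
    """Evolve reward functions: keep whichever candidate best matches the
    demonstration (lowest DTW); its footage and metrics would, in the full
    pipeline, feed back into the prompt for the next generation."""
    best = None
    for _ in range(generations):
        for src in generate_reward_candidates(video_prompt, k):
            policy, rollout_signal = train_ppo_policy(src)
            score = dtw_distance(rollout_signal, demo_signal)
            if best is None or score < best[0]:
                best = (score, src, policy)
    return best
```

The `dtw_distance` routine is runnable as written; the two stubs mark where video prompting and simulation training would plug in under these assumptions.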
Supplementary Material: zip
Spotlight: zip
Submission Number: 178