TL;DR: We propose STAR, a framework that advances both skill learning and composition to complete complex behaviors.
Abstract: Transforming complex actions into discrete skill abstractions has demonstrated strong potential for robotic manipulation. Existing approaches mainly leverage latent variable models, e.g., VQ-VAE, to learn skill abstractions through learned vectors (codebooks), but they suffer from codebook collapse and struggle to model the causal relationships between learned skills. To address these limitations, we present **S**kill **T**raining with **A**ugmented **R**otation (**STAR**), a framework that advances both skill learning and composition to complete complex behaviors. Specifically, to prevent codebook collapse, we devise rotation-augmented residual skill quantization (RaRSQ), which encodes the relative angles between encoder outputs into the gradient flow via a rotation-based gradient mechanism. Points assigned to the same skill code are either pushed apart or pulled closer together depending on the gradient directions. Further, to capture the causal relationships between skills, we present the causal skill transformer (CST), which explicitly models dependencies between skill representations through an autoregressive mechanism for coherent action generation. Extensive experiments demonstrate the superiority of STAR on both the LIBERO benchmark and real-world tasks, with around 12% improvement over the baselines.
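To make the rotation-based gradient mechanism concrete, here is a minimal NumPy sketch of how an encoder output can be mapped onto its codebook vector by a rotation plus rescaling rather than a straight-through copy. This is an illustrative construction in the spirit of the rotation trick for vector quantization, not the paper's exact RaRSQ update: the function `rotate_to` and its arguments are hypothetical names, and a real implementation would treat `R` and the scale as constants in the backward pass so gradients at the quantized vector are rotated back to the encoder output, preserving the relative angle between them.

```python
import numpy as np

def rotate_to(e, q, eps=1e-8):
    """Map encoder output e onto codebook vector q by rotation + rescaling.

    Hypothetical sketch of a rotation-based quantization step: R is the
    rotation taking the direction of e to the direction of q (built as a
    composition of two reflections), and lam rescales to q's norm.  In a
    VQ-style model, R and lam would be detached so that gradients at the
    quantized output are rotated (not copied) back to e, making the update
    depend on the angle between e and its assigned code.
    """
    e_hat = e / (np.linalg.norm(e) + eps)
    q_hat = q / (np.linalg.norm(q) + eps)
    # Rotation sending e_hat to q_hat (degenerate if e_hat ~ -q_hat):
    r = e_hat + q_hat
    r = r / (np.linalg.norm(r) + eps)
    R = np.eye(len(e)) - 2.0 * np.outer(r, r) + 2.0 * np.outer(q_hat, e_hat)
    lam = np.linalg.norm(q) / (np.linalg.norm(e) + eps)
    return lam * (R @ e)  # equals q in the forward pass

e = np.array([1.0, 2.0, 0.5])   # encoder output (made-up values)
q = np.array([0.5, -1.0, 1.5])  # nearest codebook vector (made-up values)
q_tilde = rotate_to(e, q)
assert np.allclose(q_tilde, q)  # forward pass reproduces the code exactly
```

The design point is that, unlike the straight-through estimator, the backward pass through a fixed rotation yields different updates for points mapped to the same code depending on where they sit relative to it, which is the behavior the abstract attributes to RaRSQ.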
Lay Summary: Teaching robots to perform complex, multi-step tasks like opening a drawer, placing an object inside, and closing it remains challenging because current methods struggle to break down these behaviors into reusable components. Existing approaches often fail because they either forget previously learned skills or cannot properly sequence multiple actions together.
We developed STAR, a new learning framework that teaches robots by first decomposing complex actions into discrete "skills" - like building blocks that can be combined in different ways. Our key innovation prevents the common problem where robots overlook most of their learned skills and only remember a few, ensuring they maintain a diverse toolkit. We also developed a method for robots to understand which skills should come before others, enabling proper sequencing of multi-step tasks.
We tested STAR on over 180 robotic manipulation tasks and achieved 93.6% success compared to 81.5% for previous methods. In real-world experiments, our approach successfully completed challenging sequences like drawer manipulation that require precise coordination of multiple skills.
Primary Area: Deep Learning
Keywords: Embodied AI, Robotics, Skill Learning
Submission Number: 10107