Disentangled Multi-Context Meta-Learning: Unlocking Robust and Generalized Task Learning

Published: 08 Aug 2025, Last Modified: 16 Sept 2025 · CoRL 2025 Poster · License: CC BY 4.0
Keywords: Meta-Learning, Multi-Task Learning, Quadruped Robot Locomotion
TL;DR: Explicit and selective context modeling enables robust and generalizable robot learning
Abstract: In meta-learning and its downstream tasks, many methods rely on implicit adaptation to capture task-specific variation. However, such implicit approaches hinder interpretability and make it difficult to identify which task factors drive performance. In this work, we introduce a disentangled multi-context meta-learning framework that explicitly learns a separate context vector for each aspect that defines a task. Decoupling these factors improves both robustness, through a deeper understanding of task structure, and generalization, by enabling context vectors to be shared across tasks that have a factor in common. We evaluate our approach in two domains. First, on a sinusoidal regression benchmark, our model outperforms baselines on out-of-distribution tasks and generalizes to unseen sine functions by sharing the context vectors associated with shared amplitudes or phase shifts. Second, in a quadruped locomotion task, we disentangle robot-specific properties from terrain characteristics in the learned robot dynamics model. Conditioning a reinforcement learning policy on these context vectors yields improved robustness under out-of-distribution conditions compared to a policy that uses a single unified context. Furthermore, by sharing context effectively, our model enables successful sim-to-real policy transfer to challenging terrains with out-of-distribution robot-specific properties, using only real data collected on flat terrain, which single-task adaptation cannot achieve.
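To make the factor-sharing idea concrete, here is a minimal sketch (not the authors' released code or architecture) of disentangled multi-context learning on the sinusoidal benchmark described in the abstract. Every identifier and hyperparameter below (`MultiContextRegressor`, `amp_ctx`, `phase_ctx`, the network sizes) is an illustrative assumption; the sketch shows only the core mechanism: each task y = A·sin(x + φ) gets two context vectors, one tied to its amplitude and one to its phase shift, so tasks sharing a factor share that vector, and an unseen (amplitude, phase) combination can reuse contexts learned from other tasks.

```python
# Minimal sketch of factor-level context sharing on sinusoidal regression.
# Assumed, illustrative names throughout -- not the paper's implementation.
import torch
import torch.nn as nn

torch.manual_seed(0)

AMPS = [0.5, 1.0, 2.0]      # amplitude factors seen during training
PHASES = [0.0, 0.7, 1.4]    # phase-shift factors seen during training
CTX_DIM = 8

class MultiContextRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # One learnable context vector per amplitude and per phase value;
        # a task is identified by the PAIR (amp_id, phase_id).
        self.amp_ctx = nn.Embedding(len(AMPS), CTX_DIM)
        self.phase_ctx = nn.Embedding(len(PHASES), CTX_DIM)
        self.net = nn.Sequential(
            nn.Linear(1 + 2 * CTX_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x, amp_id, phase_id):
        # Concatenate the two disentangled contexts and condition the net on them.
        c = torch.cat([self.amp_ctx(amp_id), self.phase_ctx(phase_id)], dim=-1)
        return self.net(torch.cat([x, c.expand(x.shape[0], -1)], dim=-1))

model = MultiContextRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Train on all (amplitude, phase) combinations EXCEPT one held-out pair.
train_pairs = [(a, p) for a in range(3) for p in range(3) if (a, p) != (2, 2)]
for step in range(3000):
    a, p = train_pairs[step % len(train_pairs)]
    x = torch.rand(32, 1) * 10 - 5
    y = AMPS[a] * torch.sin(x + PHASES[p])
    loss = nn.functional.mse_loss(model(x, torch.tensor(a), torch.tensor(p)), y)
    opt.zero_grad(); loss.backward(); opt.step()

# The held-out task (A=2.0, phi=1.4) was never trained on as a pair, but both
# of its factor-level contexts were learned through other tasks and are reused.
x = torch.linspace(-5, 5, 100).unsqueeze(1)
y = AMPS[2] * torch.sin(x + PHASES[2])
with torch.no_grad():
    err = nn.functional.mse_loss(model(x, torch.tensor(2), torch.tensor(2)), y)
print(f"held-out (amplitude, phase) combination MSE: {err.item():.4f}")
```

Note that the held-out task is unseen only as a combination; each of its factors appeared in other training tasks, which is the context-sharing mechanism the abstract refers to. How the paper infers context vectors for genuinely new factor values (e.g., from real flat-terrain data) is part of the method itself and is not reproduced here.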
Supplementary Material: zip
Spotlight: mp4
Submission Number: 1138