Keywords: Hierarchical Reinforcement Learning, Reinforcement Learning
TL;DR: Learning temporally abstract actions (skills) at multiple time scales for HRL agents
Abstract: Hierarchical reinforcement learning depends on temporally abstract actions to solve long-horizon tasks.
We propose Multi-Resolution Skills (MRS), a simple and scalable approach that constructs a discrete set of skill modules, each specialized to predict subgoals at a fixed temporal horizon (e.g., 8, 16, 32, 64 steps).
Skill encoders share parameters, so the multi-resolution design adds little to model size while allowing each module to generate plans at a distinct temporal resolution.
A learned meta-controller selects among these resolution-specific skills based on the task context; the meta-controller and skill policies are trained jointly, end to end, in a single training phase.
We evaluate MRS on DeepMind Control Suite, Gym-Robotics, and long-horizon AntMaze tasks.
MRS consistently outperforms single-resolution baselines, yields meaningful gains over HRL baselines on long-horizon navigation, and remains competitive with non-hierarchical state-of-the-art (SOTA) methods on standard benchmarks, all while maintaining computational efficiency.
Ablations show that the multi-resolution design drives the improvement, suggesting that temporal partitioning of skills is a useful inductive bias for HRL.
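
The following is a minimal sketch (not the authors' implementation) of the architecture the abstract describes: a shared state encoder, one lightweight subgoal head per temporal horizon, and a meta-controller that scores the horizons from the encoded context. Only the shared encoder, the horizon set (8, 16, 32, 64), and joint end-to-end training come from the abstract; all module names, layer sizes, the dummy inputs, and the softmax-weighted selection (used here simply to keep everything differentiable under one objective; the paper may train the meta-controller differently) are illustrative assumptions.

import torch
import torch.nn as nn


class MultiResolutionSkills(nn.Module):
    def __init__(self, state_dim, goal_dim, horizons=(8, 16, 32, 64), hidden=256):
        super().__init__()
        self.horizons = horizons
        # Shared encoder: reused by every skill module, so adding horizons
        # only adds small per-horizon heads (minimal growth in model size).
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One subgoal head per temporal resolution.
        self.subgoal_heads = nn.ModuleList(
            [nn.Linear(hidden, goal_dim) for _ in horizons]
        )
        # Meta-controller: scores the available horizons from the task context.
        self.meta_controller = nn.Linear(hidden, len(horizons))

    def forward(self, state):
        z = self.encoder(state)
        logits = self.meta_controller(z)                     # (batch, n_horizons)
        subgoals = torch.stack(
            [head(z) for head in self.subgoal_heads], dim=1  # (batch, n_horizons, goal_dim)
        )
        # Soft selection keeps the model differentiable, so the meta-controller
        # and skill heads can be trained jointly with a single objective.
        weights = torch.softmax(logits, dim=-1).unsqueeze(-1)
        subgoal = (weights * subgoals).sum(dim=1)
        horizon_idx = logits.argmax(dim=-1)                  # hard choice at execution time
        return subgoal, horizon_idx


# Example usage on a dummy batch of states.
model = MultiResolutionSkills(state_dim=32, goal_dim=8)
subgoal, horizon_idx = model(torch.randn(4, 32))
print(subgoal.shape, horizon_idx.shape)  # torch.Size([4, 8]) torch.Size([4])
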
Primary Area: reinforcement learning
Submission Number: 6744