Hierarchical Latent Action Model
Keywords: Robot Learning, Latent Action Model
Abstract: Latent Action Models (LAMs) enable learning from actionless data for applications ranging from robotic control to interactive world models. However, existing LAMs typically focus on short-horizon frame transitions and capture low-level motion while overlooking longer-term temporal structure. In contrast, actionless videos often contain temporally extended, high-level skills. We present HiLAM, a hierarchical latent action model that discovers latent skills by modeling long-term temporal information. To capture these dependencies across long horizons, we use a pretrained LAM as a low-level extractor. The architecture aggregates sequences of latent actions, which encode the underlying dynamic patterns of the video, into high-level latent skills. Our experiments demonstrate that HiLAM improves over baselines and exhibits robust dynamic skill discovery.
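The abstract describes a two-level pipeline: a pretrained low-level LAM infers latent actions from frame transitions, and a high-level module aggregates those action sequences into skill latents. The paper's actual architecture is not given here, so the sketch below is purely illustrative: the frame features, the random linear projection standing in for the pretrained LAM encoder, and the mean-pooling aggregator are all hypothetical stand-ins chosen only to show the data flow and tensor shapes.

```python
import numpy as np

rng = np.random.default_rng(0)

def low_level_lam(frames):
    # Hypothetical pretrained LAM: infer one latent action per adjacent
    # frame pair. A random linear projection of the frame difference
    # stands in for the real learned encoder.
    W = rng.standard_normal((frames.shape[-1], 8))
    diffs = frames[1:] - frames[:-1]          # (T-1, D) frame transitions
    return diffs @ W                          # (T-1, 8) latent actions

def aggregate_skills(latent_actions, window=4):
    # Hypothetical high-level aggregator: pool each window of latent
    # actions into a single skill latent, capturing a longer horizon
    # than any single frame transition.
    T = latent_actions.shape[0] // window * window
    chunks = latent_actions[:T].reshape(-1, window, latent_actions.shape[-1])
    return chunks.mean(axis=1)                # (T // window, 8) skill latents

frames = rng.standard_normal((17, 32))        # 17 frames, 32-dim features
actions = low_level_lam(frames)               # 16 latent actions
skills = aggregate_skills(actions, window=4)  # 4 skill latents
print(actions.shape, skills.shape)            # (16, 8) (4, 8)
```

In a real system the aggregator would be a learned temporal model (and the window boundaries would likely be soft or discovered), but the shape bookkeeping above reflects the stated idea: many short-horizon latent actions compress into few long-horizon skills.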
Submission Number: 50