MaIL: Improving Imitation Learning with Selective State Space Models

Published: 05 Sept 2024, Last Modified: 22 Oct 2024 (CoRL 2024, CC BY 4.0)
Keywords: Imitation Learning, Sequence Models, Denoising Diffusion Policies
Abstract: This work introduces Mamba Imitation Learning (MaIL), a novel imitation learning (IL) architecture that offers a computationally efficient alternative to state-of-the-art (SoTA) Transformer policies. Transformer-based policies have achieved remarkable results due to their ability to handle human-recorded data with inherently non-Markovian behavior. However, their high performance comes at the cost of large models that complicate effective training. While state space models (SSMs) are known for their efficiency, they have not been able to match the performance of Transformers. Mamba significantly improves the performance of SSMs and rivals Transformers, positioning it as an appealing alternative for IL policies. MaIL leverages Mamba as a backbone and introduces a formalism that allows Mamba to be used in an encoder-decoder structure. This formalism makes it a versatile architecture that can be used as a standalone policy or as part of a more advanced architecture, such as a diffuser in a diffusion process. Extensive evaluations on the LIBERO IL benchmark and three real robot experiments show that MaIL: i) outperforms Transformers on all LIBERO tasks, ii) achieves good performance even with small datasets, iii) effectively processes multi-modal sensory inputs, and iv) is more robust to input noise than Transformers.
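The abstract describes the encoder-decoder use of Mamba only at a high level. Below is a minimal, illustrative sketch of how such a policy could be assembled in PyTorch. The `mamba_ssm.Mamba` block, the residual stacking, the layer counts, and the observation/action token layout are all assumptions made for illustration; they are not taken from the paper, and the linked repository should be consulted for the actual implementation.

```python
# Minimal sketch of an encoder-decoder policy built from Mamba (selective SSM)
# blocks, assuming PyTorch and the `mamba_ssm` package. All hyperparameters and
# the token layout are placeholder choices, not the configuration used in MaIL.
import torch
import torch.nn as nn
from mamba_ssm import Mamba


class MambaEncoderDecoderPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, d_model=256, n_enc=4, n_dec=4):
        super().__init__()
        self.obs_proj = nn.Linear(obs_dim, d_model)
        self.act_proj = nn.Linear(act_dim, d_model)
        # Encoder: a stack of Mamba blocks over the observation tokens.
        self.encoder = nn.ModuleList([Mamba(d_model=d_model) for _ in range(n_enc)])
        # Decoder: another Mamba stack over the encoded context plus action tokens.
        self.decoder = nn.ModuleList([Mamba(d_model=d_model) for _ in range(n_dec)])
        self.head = nn.Linear(d_model, act_dim)

    def forward(self, obs_seq, act_seq):
        # obs_seq: (B, T_obs, obs_dim), act_seq: (B, T_act, act_dim)
        ctx = self.obs_proj(obs_seq)
        for layer in self.encoder:
            ctx = ctx + layer(ctx)           # residual Mamba encoder block
        tokens = torch.cat([ctx, self.act_proj(act_seq)], dim=1)
        for layer in self.decoder:
            tokens = tokens + layer(tokens)  # residual Mamba decoder block
        # Predict actions from the trailing action-token positions.
        return self.head(tokens[:, -act_seq.shape[1]:])
```

In this sketch the decoder directly regresses actions, i.e. a standalone behavior-cloning policy; in the diffusion variant described in the abstract, the same encoder-decoder stack would instead act as the denoiser, taking noisy action tokens and a diffusion timestep embedding as input.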
Supplementary Material: zip
Spotlight Video: mp4
Code: https://github.com/ALRhub/MaIL
Publication Agreement: pdf
Student Paper: yes
Submission Number: 693