A Large Recurrent Action Model: xLSTM enables Fast Inference for Robotics Tasks

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: In recent years, there has been a trend in the field of Reinforcement Learning (RL) towards large action models trained offline on large-scale datasets via sequence modeling. Existing models are primarily based on the Transformer architecture, which yields powerful agents. However, due to slow inference times, Transformer-based approaches are impractical for real-time applications such as robotics. Recently, modern recurrent architectures such as xLSTM and Mamba have been proposed, which exhibit parallelization benefits during training similar to the Transformer architecture while offering fast inference. In this work, we study the aptitude of these modern recurrent architectures for large action models. To this end, we propose a Large Recurrent Action Model (LRAM) with an xLSTM at its core, which comes with linear-time inference complexity and natural sequence-length extrapolation abilities. Experiments on 432 tasks from 6 domains show that LRAM compares favorably to Transformers in terms of both performance and speed.
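To illustrate the efficiency argument in the abstract, here is a minimal sketch (PyTorch; not the authors' implementation, and all sizes and tensor names are illustrative, with torch.nn.LSTMCell standing in for the xLSTM cell) of why a recurrent model has constant per-step inference cost while a Transformer attending over a key/value cache has per-step cost that grows with the episode length:

    # Minimal sketch contrasting per-step inference cost; sizes are illustrative.
    import torch

    d = 256  # hidden/state size

    # Recurrent step (xLSTM/Mamba-style models work analogously): the state
    # (h, c) has a fixed size, so every action step costs the same no matter
    # how many steps came before -- O(1) per step, O(t) for a whole episode.
    cell = torch.nn.LSTMCell(d, d)
    x = torch.randn(1, d)                        # current observation embedding
    h, c = torch.zeros(1, d), torch.zeros(1, d)  # fixed-size recurrent state
    h, c = cell(x, (h, c))                       # same cost at step 1 and step 10_000

    # Transformer step: the new query attends over all cached keys/values, so
    # each step costs O(t) and the cache memory also grows linearly with t.
    t = 1024                                     # steps generated so far
    keys, values = torch.randn(1, t, d), torch.randn(1, t, d)
    q = torch.randn(1, 1, d)                     # query for the current step
    attn = torch.softmax(q @ keys.transpose(-2, -1) / d**0.5, dim=-1)
    out = attn @ values                          # cost grows with t

For a robot acting at a fixed control frequency, the recurrent variant's constant per-step cost is what makes real-time deployment practical.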
Lay Summary: Existing large action models based on the Transformer architecture can be impractical for real-time applications, such as robotics, because they are computationally costly when deployed. Recently, modern recurrent architectures have been introduced that are more efficient at inference. In this work, we study the aptitude of these modern recurrent architectures for large action models. We conduct experiments on 432 tasks from 6 domains, including simulated robotics environments and video games (such as Atari), and find that our approach compares favorably to Transformers in terms of performance and speed. Modern recurrent architectures may therefore be a practical alternative for real-world applications such as robotics.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/ml-jku/LRAM
Primary Area: Reinforcement Learning->Batch/Offline
Keywords: reinforcement learning, rnn, xlstm, mamba, multi-task, robotics
Submission Number: 7324