Can Mamba Learn How To Learn? A Comparative Study on In-Context Learning Tasks

ICLR 2024 Workshop ME-FoMo Submission 98 Authors

Published: 04 Mar 2024, Last Modified: 06 May 2024 · ME-FoMo 2024 Poster · CC BY 4.0
Keywords: in-context learning, transformer, state-space model
TL;DR: We present a suite of in-context learning (ICL) tasks and find that state-space models can perform ICL. We also introduce a hybrid architecture, MambaFormer, that succeeds in tasks where both Transformer and Mamba fail.
Abstract: State-space models (SSMs), such as Mamba~\cite{Gu2023mamba}, have been proposed as alternatives to Transformer networks in language modeling, incorporating gating, convolutions, and input-dependent token selection to mitigate the quadratic cost of multi-head attention. Although SSMs exhibit competitive performance, their in-context learning (ICL) capabilities, a remarkable emergent property of modern language models that enables task execution without parameter optimization, remain less explored compared to Transformers. In this study, we evaluate the ICL performance of SSMs, focusing on Mamba, against Transformer models across various tasks. Our results show that SSMs perform comparably to Transformers in standard regression ICL tasks, while outperforming them in tasks like sparse parity learning. However, SSMs fall short in tasks involving non-standard retrieval functionality. To address these limitations, we introduce a hybrid model, MambaFormer, that combines Mamba with attention blocks, surpassing individual models in tasks where they struggle independently. Our findings suggest that hybrid architectures offer promising avenues for enhancing ICL in language models.
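The abstract describes MambaFormer only as a hybrid that "combines Mamba with attention blocks." The sketch below illustrates one way such a hybrid could be assembled; the layer ordering, block counts, model dimensions, and the use of the `mamba_ssm` package are assumptions for illustration, not the paper's specified architecture.

```python
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # assumed available via `pip install mamba-ssm`


class AttentionBlock(nn.Module):
    """Pre-norm multi-head self-attention block with a residual connection."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        h = self.norm(x)
        out, _ = self.attn(h, h, h, need_weights=False)
        return x + out


class MambaBlock(nn.Module):
    """Pre-norm Mamba (selective SSM) block with a residual connection."""

    def __init__(self, d_model: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.mamba = Mamba(d_model=d_model)

    def forward(self, x):
        return x + self.mamba(self.norm(x))


class MambaFormerSketch(nn.Module):
    """Hypothetical hybrid: alternating attention and Mamba blocks.

    The specific interleaving below is an assumption made for this sketch;
    the abstract only states that Mamba and attention blocks are combined.
    """

    def __init__(self, d_model: int = 256, n_heads: int = 8, n_pairs: int = 4):
        super().__init__()
        layers = [MambaBlock(d_model)]
        for _ in range(n_pairs):
            layers += [AttentionBlock(d_model, n_heads), MambaBlock(d_model)]
        self.layers = nn.Sequential(*layers)

    def forward(self, x):
        # x: (batch, sequence_length, d_model) of embedded in-context examples
        return self.layers(x)
```

For an ICL-style evaluation, the input would be a sequence of embedded (input, label) example pairs followed by a query, with the model trained to predict the query's label from context alone.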
Submission Number: 98