Mimetic Initialization Helps State Space Models Learn to Recall

Published: 05 Mar 2025, Last Modified: 05 Mar 2025
Venue: ICLR 2025 Workshop on Weight Space Learning (Poster)
License: CC BY 4.0
Track: long paper (up to 8 pages)
Keywords: Mamba, SSM, sequence models, language models, initialization, linear attention
TL;DR: Mamba learns to recall more easily when we initialize it to be more like linear attention
Submission Number: 23
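
The TL;DR summarizes the paper's core idea: initialize Mamba's selective-SSM parameters so that, at the start of training, the layer behaves more like linear attention (a running sum of key-value outer products, queried at each step). Below is a minimal, hypothetical PyTorch sketch of what such a mimetic initialization could look like; the function name `mimetic_init_ssm`, the specific constants, and the weight-alignment choice are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn


def mimetic_init_ssm(d_model: int, d_state: int):
    """Sketch of a mimetic initialization for a selective SSM.

    Assumption (not the paper's verified recipe): linear attention
    maintains S_t = S_{t-1} + k_t v_t^T and reads it out with q_t.
    A Mamba-style recurrence h_t = Abar * h_{t-1} + Bbar x_t,
    y_t = C_t^T h_t resembles this when (a) the transition barely
    decays (Abar ~ 1) and (b) the B ("key") and C ("query")
    projections are aligned so C_t^T B_s tracks x_t^T x_s.
    """
    # (b) Align the B and C input projections by sharing one
    # random matrix, so the score C_t^T B_s mimics an attention
    # score between inputs at positions t and s.
    W = torch.randn(d_state, d_model) / d_model**0.5
    B_proj = nn.Linear(d_model, d_state, bias=False)
    C_proj = nn.Linear(d_model, d_state, bias=False)
    with torch.no_grad():
        B_proj.weight.copy_(W)
        C_proj.weight.copy_(W)

    # (a) Mamba parameterizes A = -exp(A_log); a very negative
    # A_log makes A ~ 0, so the discretized transition
    # exp(dt * A) ~ 1 and the state accumulates (little decay),
    # like linear attention's running sum. The value -8.0 is an
    # illustrative choice, not taken from the paper.
    A_log = nn.Parameter(torch.full((d_state,), -8.0))

    return B_proj, C_proj, A_log


if __name__ == "__main__":
    B_proj, C_proj, A_log = mimetic_init_ssm(d_model=64, d_state=16)
    x = torch.randn(8, 64)  # 8 tokens of a toy sequence
    scores = C_proj(x) @ B_proj(x).T  # ~ attention-like score matrix
    print(scores.shape, torch.exp(-torch.exp(A_log)).mean())
```

The intuition behind this kind of setup is that associative recall (retrieving a value paired with an earlier key) is exactly what linear attention computes in closed form, so starting the SSM near that solution should make the recall circuit easier to learn than from a generic random initialization.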