Towards Interpretable Structure Prediction With Sparse Autoencoders

Published: 06 Mar 2025, Last Modified: 09 Apr 2025 · ICLR 2025 Workshop MLMP Poster · CC BY 4.0
Track: Findings and open challenges
Keywords: protein language models, sparse autoencoders, ESM2-3B, ESMFold, mechanistic interpretability, Matryoshka architecture, hierarchical features, structure prediction, contact map prediction, model steering, solvent accessibility, biological concept discovery, feature representation, high-dimensional space, linear representations
TL;DR: We scale sparse autoencoders to ESM2-3B (ESMFold's base model) and introduce Matryoshka SAEs that learn multiscale hierarchical features, enabling interpretability and steering of protein structure predictions while maintaining performance.
Abstract: Protein language models have revolutionized structure prediction, but their nonlinear nature obscures how sequence representations inform their predictions. While sparse autoencoders (SAEs) offer a path to interpretability by learning linear representations in high-dimensional space, their application has been limited to smaller protein language models that cannot perform structure prediction. In this work, we make two key advances: (1) we scale SAEs to ESM2-3B, the base model for ESMFold, enabling mechanistic interpretability of protein structure prediction for the first time, and (2) we adapt Matryoshka SAEs to protein language models; these learn hierarchically organized features by forcing nested groups of latents to reconstruct inputs independently. We demonstrate that our Matryoshka SAEs achieve comparable or better performance than standard architectures. Through comprehensive evaluations, we show that SAEs trained on ESM2-3B significantly outperform those trained on smaller models for both biological concept discovery and contact map prediction. Finally, we present an initial case study demonstrating how our approach enables targeted steering of ESMFold predictions, increasing structure solvent accessibility while keeping the input sequence fixed. To facilitate further investigation by the broader community, we open-source our code, dataset, pretrained models, and visualizer.
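The Matryoshka objective described in the abstract — nested groups of latents, each prefix forced to reconstruct the input on its own — can be sketched as a loss function. This is a minimal illustrative sketch, not the authors' implementation: the dimensions, group sizes, ReLU/L1 sparsity choices, and all variable names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_sae = 8, 32      # toy sizes; real SAEs use far wider dictionaries
group_sizes = [4, 8, 20]    # nested prefixes of 4, 12, and 32 latents (assumed grouping)

W_enc = rng.normal(scale=0.1, size=(d_model, d_sae))
W_dec = rng.normal(scale=0.1, size=(d_sae, d_model))
b_enc = np.zeros(d_sae)

def matryoshka_loss(x, l1_coeff=1e-3):
    """Sum reconstruction losses over nested latent prefixes, plus an L1 sparsity term."""
    z = np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU sparse code (one common SAE choice)
    loss, k = 0.0, 0
    for g in group_sizes:
        k += g
        z_prefix = np.zeros_like(z)
        z_prefix[:, :k] = z[:, :k]          # only the first k latents are active
        x_hat = z_prefix @ W_dec
        loss += np.mean((x - x_hat) ** 2)   # each prefix must reconstruct x by itself
    return loss + l1_coeff * np.abs(z).sum()

x = rng.normal(size=(16, d_model))
print(matryoshka_loss(x))
```

Because early latents participate in every prefix loss, gradient pressure pushes coarse, broadly useful features into the smallest group and finer detail into later groups, which is what yields the hierarchical organization the abstract describes.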
Supplementary: https://github.com/johnyang101/reticular-sae
Presenter: ~John_Jingxuan_Yang1
Submission Number: 39
