Single Layers of Attention Suffice to Predict Protein Contacts

Published: 25 Apr 2021, Last Modified: 05 May 2023 · EBM_WS@ICLR2021 Poster
Keywords: Protein Structure, Proteins, Contact Prediction, Representation Learning, Language Modeling, Attention, Transformer, BERT, Markov Random Fields, Potts Models, Self-supervised learning
TL;DR: We show a single layer of attention can achieve competitive results on protein contact prediction and provide a link between attention and Potts models to explore why.
Abstract: The established approach to unsupervised protein contact prediction estimates coevolving positions using undirected graphical models, training a Potts model on a multiple sequence alignment. Increasingly large Transformers are being pretrained on unlabeled, unaligned protein sequence databases, but they have demonstrated mixed results on downstream tasks, including contact prediction. We argue that attention is a principled model of protein interactions, grounded in real properties of protein family data. We introduce an energy-based attention layer, factored attention, and show that it achieves performance comparable to Potts models while sharing parameters both within and across families. We contrast factored attention with the Transformer, suggesting that the Transformer leverages hierarchical signal in protein family databases that is not captured by our single-layer models. This raises the exciting possibility of developing powerful structured models of protein family databases.
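For a concrete picture of the factored attention layer described above, the sketch below is one possible PyTorch implementation under our own assumptions (module name, head count, initialization, and the masked-prediction readout are illustrative, not the authors' exact code): positions attend through learned per-column queries and keys, and each head shares a single value matrix over amino-acid symbols, which factorizes the pairwise couplings of a Potts model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FactoredAttention(nn.Module):
    """Single-layer factored attention over an aligned protein family (illustrative).

    The L x L attention map depends only on learned per-column queries and keys,
    and each head shares one A x A value matrix over amino-acid symbols, so the
    layer factorizes Potts-style couplings J_ij(a, b) as
    sum_h softmax(Q_h K_h^T / sqrt(d))_ij * V_h(a, b).
    """

    def __init__(self, seq_len: int, vocab_size: int = 21, num_heads: int = 4, head_dim: int = 32):
        super().__init__()
        self.head_dim = head_dim
        # Per-column positional queries/keys: attention is content-independent.
        self.queries = nn.Parameter(0.01 * torch.randn(num_heads, seq_len, head_dim))
        self.keys = nn.Parameter(0.01 * torch.randn(num_heads, seq_len, head_dim))
        # One value matrix over the amino-acid alphabet per head.
        self.values = nn.Parameter(0.01 * torch.randn(num_heads, vocab_size, vocab_size))
        # Single-site fields, analogous to the h_i terms of a Potts model.
        self.fields = nn.Parameter(torch.zeros(seq_len, vocab_size))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        """tokens: (batch, L) integer-encoded MSA rows -> per-position logits (batch, L, A)."""
        num_classes = self.values.shape[-1]
        x = F.one_hot(tokens, num_classes=num_classes).float()               # (B, L, A)
        scores = torch.einsum("hid,hjd->hij", self.queries, self.keys) / self.head_dim ** 0.5
        # Mask self-couplings so column i cannot trivially predict itself.
        eye = torch.eye(scores.shape[-1], dtype=torch.bool, device=scores.device)
        attn = torch.softmax(scores.masked_fill(eye, float("-inf")), dim=-1)  # (H, L, L)
        # logits[b, i, c] = h_i(c) + sum_{h, j, a} attn[h, i, j] * x[b, j, a] * V_h(a, c)
        return self.fields + torch.einsum("hij,bja,hac->bic", attn, x, self.values)
```

In this sketch the layer would be fit per family with a masked-token or pseudolikelihood loss on the MSA, and a candidate contact map could then be read off by symmetrizing and average-product-correcting the attention maps; both choices are assumptions for illustration rather than details fixed by the abstract.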