Protein Representation Learning by Capturing Protein Sequence-Structure-Function Relationship

Published: 04 Mar 2024, Last Modified: 27 Apr 2024
Venue: MLGenX 2024 Spotlight
License: CC BY 4.0
Keywords: Protein representation learning, Multimodal learning, Masked autoencoder
Abstract: The goal of protein representation learning is to extract knowledge from protein databases that can be applied to various protein-related downstream tasks. Although protein sequence, structure, and function are the three key modalities for a comprehensive understanding of proteins, existing methods for protein representation learning have utilized only one or two of these modalities due to the difficulty of capturing the asymmetric interrelationships between them. To account for this asymmetry, we introduce our novel asymmetric multi-modal masked autoencoder (AMMA). AMMA adopts (1) a unified multi-modal encoder to integrate all three modalities into a unified representation space and (2) asymmetric decoders to ensure that sequence latent features reflect structural and functional information. Experiments demonstrate that AMMA learns protein representations with well-aligned inter-modal relationships, which in turn transfer effectively to various downstream protein-related tasks.
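The abstract names only the high-level components of AMMA: a unified multi-modal encoder over sequence, structure, and function, and asymmetric decoders that reconstruct the other modalities from the sequence latents. The sketch below is a minimal, illustrative PyTorch wiring of that idea; the module names, dimensions, token vocabularies, and pooling choices are assumptions for exposition rather than the authors' implementation, and the masked-input pre-training step is omitted for brevity.

```python
# Illustrative sketch of an asymmetric multi-modal masked autoencoder.
# All names, sizes, and design details here are assumptions, not the
# authors' code; input masking used during pre-training is omitted.
import torch
import torch.nn as nn


class UnifiedMultiModalEncoder(nn.Module):
    """Embeds sequence, structure, and function tokens into one shared
    space and encodes the concatenation with a single Transformer."""

    def __init__(self, vocab_sizes, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.embeddings = nn.ModuleDict(
            {m: nn.Embedding(v, d_model) for m, v in vocab_sizes.items()}
        )
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, tokens):  # tokens: dict of (B, L_m) LongTensors
        embedded = [self.embeddings[m](t) for m, t in tokens.items()]
        lengths = [e.shape[1] for e in embedded]
        h = self.encoder(torch.cat(embedded, dim=1))  # (B, sum L_m, d)
        return dict(zip(tokens.keys(), h.split(lengths, dim=1)))


class AMMASketch(nn.Module):
    """Asymmetric decoding: only the sequence latents are passed to the
    structure and function decoders, so they must carry cross-modal
    information to reconstruct the other two modalities."""

    def __init__(self, vocab_sizes, d_model=256):
        super().__init__()
        self.encoder = UnifiedMultiModalEncoder(vocab_sizes, d_model)
        # Lightweight decoders reading sequence latents only (the asymmetry).
        self.structure_decoder = nn.Linear(d_model, vocab_sizes["structure"])
        self.function_decoder = nn.Linear(d_model, vocab_sizes["function"])

    def forward(self, tokens):
        latents = self.encoder(tokens)
        seq_latent = latents["sequence"]                    # (B, L_seq, d)
        pooled = seq_latent.mean(dim=1)                     # (B, d)
        struct_logits = self.structure_decoder(seq_latent)  # per-residue structure tokens
        func_logits = self.function_decoder(pooled)         # protein-level function labels
        return struct_logits, func_logits


if __name__ == "__main__":
    vocab = {"sequence": 25, "structure": 10, "function": 500}
    model = AMMASketch(vocab)
    batch = {m: torch.randint(0, v, (2, 16)) for m, v in vocab.items()}
    struct_logits, func_logits = model(batch)
    print(struct_logits.shape, func_logits.shape)  # (2, 16, 10) (2, 500)
```

The asymmetry in this sketch lies in routing only the sequence latents to the structure and function decoders, which pressures the shared encoder to fold structural and functional signal into the sequence representation; how AMMA realizes this in detail is described in the paper itself.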
Submission Number: 21