Conditional Invariances for Conformer Invariant Protein Representations

Published: 21 Oct 2022, Last Modified: 05 May 2023
Venue: AI4Science Poster
Keywords: Invariances, Conditional Invariances, input dependent invariances, proteins, protein representation learning, conformer invariant representations, graph neural networks, group invariant neural networks
TL;DR: We propose the conditional invariance (CI) framework, which captures input-dependent transformation invariances as an add-on to existing neural network methods. We augment existing protein GNNs with CI to learn conformer invariant representations.
Abstract: Representation learning for proteins is an emerging area in geometric deep learning. Recent works have factored in both the relational (atomic bonds) and the geometric aspects (atomic positions) of the task, notably bringing together graph neural networks (GNNs) with neural networks for point clouds. The equivariances and invariances to geometric transformations (group actions such as rotations and translations) considered so far treat large molecules as rigid structures. However, in many important settings, proteins co-exist as an ensemble of multiple stable conformations. These conformations cannot be described as input-independent transformations of the protein: two proteins may require different sets of transformations to describe their viable conformations. To address this limitation, we introduce the concept of conditional transformations (CT). CT can capture protein structure while respecting the constraints imposed by dihedral (torsion) angles and steric repulsions between atoms. We then introduce a Markov chain Monte Carlo framework to learn representations that are invariant to these conditional transformations. Our results show that endowing existing baseline models with these conditional transformations improves their performance without increasing computational cost.
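The page itself carries no code, but the abstract's MCMC-over-conditional-transformations idea can be illustrated concretely. The sketch below, which is a hypothetical simplification and not the authors' implementation, shows one Metropolis-Hastings step over a dihedral-angle rotation gated by a soft steric-clash energy; all names (`apply_dihedral_rotation`, `steric_energy`, `mcmc_conformer_step`) and the energy model are illustrative assumptions.

```python
import numpy as np

def rotation_about_axis(axis, theta):
    """Rodrigues' formula: 3x3 rotation matrix about a unit axis."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def apply_dihedral_rotation(pos, bond, downstream, theta):
    """Rotate the atoms downstream of a rotatable bond by angle theta.

    pos:        (N, 3) atomic coordinates
    bond:       (i, j) indices of the rotatable bond's endpoints
    downstream: indices of the atoms moved by the rotation
    """
    i, j = bond
    axis = pos[j] - pos[i]
    R = rotation_about_axis(axis, theta)
    out = pos.copy()
    out[downstream] = (out[downstream] - pos[j]) @ R.T + pos[j]
    return out

def steric_energy(pos, r_min=1.2):
    """Soft quadratic penalty for atom pairs closer than r_min angstroms."""
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    i, j = np.triu_indices(len(pos), k=1)  # unique atom pairs only
    clash = np.clip(r_min - dist[i, j], 0.0, None)
    return float((clash ** 2).sum())

def mcmc_conformer_step(pos, bond, downstream, step=0.3, beta=10.0, rng=None):
    """One Metropolis-Hastings step over a dihedral angle.

    Proposes a random rotation about a rotatable bond and accepts it
    with probability exp(-beta * (E_new - E_old)), so sampled conformers
    respect steric constraints on average. Rejected proposals return the
    input coordinates unchanged.
    """
    if rng is None:
        rng = np.random.default_rng()
    theta = rng.normal(scale=step)
    proposal = apply_dihedral_rotation(pos, bond, downstream, theta)
    dE = steric_energy(proposal) - steric_energy(pos)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        return proposal  # accepted: a new plausible conformer
    return pos
```

In a training loop of the kind the abstract suggests, one could call `mcmc_conformer_step` on the input coordinates before each forward pass, so the GNN sees a stream of sterically plausible conformers of the same protein and is pushed toward conformer-invariant representations.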