Keywords: Deep learning, Vision Transformer, Affinity Prediction, Protein Interaction
TL;DR: We propose ImageAM, a vision transformer with multi-image masked pretraining that better models mutation effects on antibody–antigen affinity, outperforming existing methods.
Abstract: Modeling the impact of amino acid mutations on antibody–antigen binding affinity is critical for therapeutic antibody design. Existing structure-based deep learning approaches can capture structural details of binding interfaces, but they often fail to account for subtle physicochemical perturbations introduced by mutations, limiting their ability to explain affinity shifts. To address this challenge, we present ImageAM, a mutation-aware vision transformer framework that learns from unlabeled protein–protein interaction complex structures. ImageAM projects multiple structural and physicochemical interface features into two-dimensional (2D) images and employs a multi-channel masked reconstruction pretraining task, enabling the model to learn mutation-induced patterns across heterogeneous contexts. This pretraining strategy equips the encoder with strong generalization ability, which is further refined through fine-tuning for antibody affinity maturation prediction. Extensive experiments on benchmark datasets demonstrate that ImageAM consistently surpasses state-of-the-art methods across multiple metrics, while exhibiting superior robustness and out-of-distribution generalization in predicting binding affinity changes between mutant and wild-type complexes. Code is available at https://anonymous.4open.science/r/ImageAM-ICLR.
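The masked reconstruction objective described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the patch size, mask ratio, and channel count below are illustrative assumptions, and the reconstruction loss is shown in MAE style (MSE on masked patches only) over a multi-channel 2D interface image.

```python
# Hypothetical sketch of a multi-channel masked reconstruction objective.
# Patch size, mask ratio, and channel count are illustrative assumptions,
# not values taken from the ImageAM paper.
import numpy as np

def patchify(img, p):
    """Split a (C, H, W) image into (N, C*p*p) flattened patches."""
    c, h, w = img.shape
    patches = img.reshape(c, h // p, p, w // p, p)
    # Reorder to (h//p, w//p, C, p, p) so each patch keeps all channels.
    patches = patches.transpose(1, 3, 0, 2, 4).reshape(-1, c * p * p)
    return patches

def masked_reconstruction_loss(img, recon, p=4, mask_ratio=0.75, rng=None):
    """MSE computed only on a random subset of masked patches."""
    rng = rng or np.random.default_rng(0)
    target = patchify(img, p)
    pred = patchify(recon, p)
    n = target.shape[0]
    n_mask = int(mask_ratio * n)
    masked = rng.choice(n, size=n_mask, replace=False)
    return float(np.mean((pred[masked] - target[masked]) ** 2))

# Toy example: 5 physicochemical feature channels on a 16x16 interface grid.
img = np.random.default_rng(1).normal(size=(5, 16, 16))
loss = masked_reconstruction_loss(img, img)  # perfect reconstruction -> 0.0
```

In a full pipeline, an encoder would see only the unmasked patches and a decoder would predict the masked ones; the loss above is what that decoder would be trained against.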
Supplementary Material: pdf
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Submission Number: 16145