Advancing Drug-Target Interaction Prediction via Graph Transformers and Residual Protein Embeddings

ICLR 2025 Conference Submission 13658 Authors

28 Sept 2024 (modified: 13 Oct 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: DTI, Transfer Learning
TL;DR: We develop a spectral decomposition of the DTI-DA transfer operator, providing insights into the modes of information transfer between domains.
Abstract: Predicting drug-target interactions (DTIs) is essential for advancing drug discovery. This paper presents a unified mathematical framework for unsupervised domain adaptation in DTI prediction, integrating measure theory, functional analysis, information geometry, and optimal transport. We introduce the DTI-Wasserstein distance, which incorporates both structural and chemical similarities of drugs and targets, and establish a refined bound on the gap between source and target risks. Our information-geometric perspective reveals the intrinsic structure of the DTI model space, characterizing optimal adaptation paths as geodesics on a statistical manifold equipped with the Fisher-Rao metric. We develop a spectral decomposition of the DTI domain-adaptation (DTI-DA) transfer operator, providing insight into the modes of information transfer between domains; this leads to the DTI-spectral embedding and DTI-spectral mutual information, which allow a more nuanced understanding of the adaptation process. Our theoretical contributions include refined bounds on DTI-DA performance that account for task-specific considerations and spectral properties of the feature space. We prove the existence of an optimal transport map for DTI-DA and derive a novel information-theoretic lower bound using DTI-mutual information. Empirical evaluations across multiple benchmark datasets demonstrate that our approach outperforms existing methods, effectively leveraging data from diverse sources for improved DTI prediction.
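The abstract does not specify how the DTI-Wasserstein distance is computed, but the idea of blending structural and chemical dissimilarities into an optimal-transport cost can be sketched concretely. The example below is a minimal, hypothetical illustration (all names and the mixing weight `alpha` are assumptions, not the paper's method): for uniform empirical measures of equal size, the 1-Wasserstein distance reduces to a minimum-cost assignment, which we solve by brute force for tiny inputs.

```python
import itertools

def dti_wasserstein(struct_cost, chem_cost, alpha=0.5):
    """Illustrative DTI-Wasserstein distance between two equal-size
    empirical DTI distributions with uniform weights.

    struct_cost[i][j] / chem_cost[i][j]: structural / chemical
    dissimilarity between source sample i and target sample j
    (assumed inputs; the paper's actual cost is not given in the
    abstract). With uniform marginals the optimal coupling is a
    permutation (Birkhoff), so we minimise the blended cost over
    all assignments -- fine for tiny n, exponential in general.
    """
    n = len(struct_cost)
    blended = [[alpha * struct_cost[i][j] + (1 - alpha) * chem_cost[i][j]
                for j in range(n)] for i in range(n)]
    best = min(sum(blended[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))
    return best / n  # each sample carries uniform mass 1/n

# Example: two source pairs vs. two target pairs
struct = [[2.0, 5.0], [5.0, 2.0]]
chem = [[0.0, 1.0], [1.0, 0.0]]
print(dti_wasserstein(struct, chem, alpha=0.5))  # 1.0
```

In practice one would replace the brute-force assignment with a linear-programming OT solver; this sketch only shows how the two similarity channels enter a single transport cost.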
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13658