CSCL-DTI: predicting drug-target interaction through cross-view and self-supervised contrastive learning

Published: 01 Jan 2024 · Last Modified: 31 Jul 2025 · BIBM 2024 · CC BY-SA 4.0
Abstract: Accurately predicting drug-target interactions (DTI) is a critical step in drug discovery. Existing DTI prediction methods primarily employ Simplified Molecular-Input Line-Entry System (SMILES) sequences or molecular graphs to learn drug representations. However, the features learned by such single-view approaches tend to be incomplete. While some multi-view methods that consider both SMILES sequences and molecular graphs have been developed, they often fall short in capturing potential interactions between the views. In this work, we propose CSCL-DTI, a novel dual contrastive learning framework for DTI prediction. First, we design a contrastive-enhanced cross-view representation learning (CVRL) module to learn drug representations. In this module, Transformer-based and graph convolutional network (GCN)-based encoders separately learn view-specific representations, and contrastive learning then enriches these representations by accounting for the potential interplay between local chemical context and topological structure. Second, we combine a Transformer with self-supervised contrastive learning (SSCL) to learn target representations by modeling protein amino acid sequences, a scheme that effectively preserves the intrinsic characteristics of the sequences. Finally, we introduce a bilinear attention network that adaptively fuses the drug and target representations into an integrated representation. Benchmarking experiments on two datasets demonstrate that CSCL-DTI outperforms six state-of-the-art methods.
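The abstract does not give the exact form of the cross-view contrastive objective. A common choice for aligning two views of the same object is a symmetric InfoNCE-style loss, where the Transformer embedding and the GCN embedding of the same drug form a positive pair and all other in-batch pairs are negatives. The sketch below is a minimal NumPy illustration of that idea, not the paper's actual loss; the function name, temperature value, and batch construction are assumptions.

```python
import numpy as np

def cross_view_info_nce(view_a, view_b, temperature=0.1):
    """Symmetric InfoNCE-style contrastive loss between two views.

    view_a, view_b: (batch, dim) embeddings of the same drugs from, e.g.,
    a SMILES (Transformer) encoder and a molecular-graph (GCN) encoder.
    Matching rows are positives; all other in-batch pairs are negatives.
    """
    # L2-normalize so dot products are cosine similarities
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature          # (batch, batch) similarity matrix
    idx = np.arange(len(a))                 # positive pairs lie on the diagonal

    def xent(lg):
        # cross-entropy of each row against its diagonal (positive) entry
        lg = lg - lg.max(axis=1, keepdims=True)           # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()

    # average both directions (a -> b and b -> a)
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls the two view-specific embeddings of each drug together while pushing apart embeddings of different drugs, which matches the stated goal of letting local chemical context and topological structure inform one another.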