Self-Supervised Representation Learning for Inferring Toxicology from Multimodal Histopathology and Omics Data
Confirmation: I have read and agree with the workshop's policy on behalf of myself and my co-authors.
Track: long paper (up to 10 pages)
Keywords: representation learning, toxicology, DILI
TL;DR: Multimodal representation learning on toxpath and omics data for DILI prediction
Abstract: The prediction of toxicologic pathology from biological data remains a critical challenge in drug development and clinical safety assessment. Traditional approaches often rely on unimodal analyses, which fail to capture the complex interplay between morphological and molecular signatures of toxicity. In this work, we propose a self-supervised multimodal representation learning framework that integrates histopathology images and high-dimensional omics data to predict drug-induced liver injury (DILI). Our approach leverages contrastive learning for image data and masked autoencoding for omics profiles, coupled with a cross-attention fusion mechanism to dynamically weigh the importance of each modality. Pretrained on a large-scale dataset of paired histopathology and transcriptomics, our model achieves competitive AUC values on a held-out test set, outperforming state-of-the-art unimodal and supervised multimodal baselines. Ablation studies demonstrate the critical role of self-supervised pretraining and cross-modal attention in capturing biologically meaningful representations. Interpretability analyses reveal that the model attends to pathologically relevant regions in images and biologically significant genes, aligning with domain knowledge. This work advances the field of computational toxicology by providing a scalable, data-efficient framework for integrating multimodal biological data, with potential applications in preclinical drug safety and precision medicine.
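The cross-attention fusion described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes the standard scaled dot-product formulation, with hypothetical token counts and embedding dimension, where tokens from one modality (e.g. omics gene-set embeddings) query the other (e.g. histopathology patch embeddings):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: one modality's tokens
    attend over the other modality's tokens."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (n_q, n_k) affinity matrix
    weights = softmax(scores, axis=-1)       # each query's distribution over keys
    return weights @ values, weights

# Hypothetical shapes for illustration only.
rng = np.random.default_rng(0)
d = 16
img_tokens = rng.normal(size=(49, d))    # e.g. patch embeddings from the image encoder
omics_tokens = rng.normal(size=(8, d))   # e.g. gene-set embeddings from the omics encoder

# Omics tokens attend over image patches; a symmetric pass (image queries
# attending over omics tokens) would complete a bidirectional fusion block.
fused, attn = cross_attention(omics_tokens, img_tokens, img_tokens)
```

The attention weights `attn` are what an interpretability analysis of the kind mentioned in the abstract would inspect: each row shows how strongly an omics token draws on each image region.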
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Presenter: ~Arijit_Patra1
Format: Yes, the presenting author will attend in person if this work is accepted to the workshop.
Funding: No, the presenting author of this submission does *not* fall under ICLR’s funding aims, or has sufficient alternate funding.
Submission Number: 31