Abstract: Foundation models are reshaping computational pathology by enabling transfer learning, where models pre-trained on vast datasets can be adapted for downstream diagnostic, prognostic, and therapeutic
response tasks. Despite these advances, foundation models remain limited in their ability to encode
entire gigapixel whole-slide images without additional training, and they often lack complementary multimodal data. Here, we introduce THREADS, a slide-level foundation model capable of generating universal representations of whole-slide images of any size. THREADS was pretrained using a multimodal
learning approach on a diverse cohort of 47,171 hematoxylin and eosin (H&E)-stained tissue sections,
paired with corresponding genomic and transcriptomic profiles—the largest such paired dataset to be
used for foundation model development to date. This unique training paradigm enables THREADS to
capture the tissue’s underlying molecular composition, yielding powerful representations applicable to
a wide array of downstream tasks. In extensive benchmarking across 54 oncology tasks, including clinical subtyping, grading, mutation prediction, immunohistochemistry status determination, treatment
response prediction, and survival prediction, THREADS outperformed all baselines while demonstrating
remarkable generalizability and label efficiency. It is particularly well-suited for predicting rare events,
further emphasizing its clinical utility. We intend to make the model publicly available to the broader
community.