MoMA: Momentum contrastive learning with multi-head attention-based knowledge distillation for histopathology image analysis
Abstract

Highlights
• MoMA is an efficient and effective learning framework for computational pathology.
• MoMA improves knowledge distillation and transfer on a limited pathology dataset.
• MoMA outperforms other related works in learning a target model for a specific task.
• We investigate MoMA for same-, relevant-, and irrelevant-task distillation scenarios.
• We provide a guideline on the learning strategy when limited datasets are available.
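To make the framework's name concrete, the sketch below illustrates the general idea of pairing a momentum (EMA) encoder for contrastive learning with a multi-head attention module that distills features from a frozen teacher. This is a minimal conceptual sketch under assumed conventions (MoCo-style momentum update, InfoNCE over a mini-batch, MSE on attended features); all module names, loss terms, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: momentum-contrastive student training combined with
# multi-head attention-based feature distillation from a frozen teacher.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoMASketch(nn.Module):
    def __init__(self, student: nn.Module, teacher: nn.Module,
                 feat_dim: int = 512, num_heads: int = 8,
                 momentum: float = 0.999, temperature: float = 0.07):
        super().__init__()
        self.student = student                     # trainable encoder
        self.key_encoder = copy.deepcopy(student)  # EMA (momentum) copy of the student
        self.teacher = teacher                     # frozen, pretrained teacher encoder
        for p in self.key_encoder.parameters():
            p.requires_grad = False
        for p in self.teacher.parameters():
            p.requires_grad = False
        self.momentum = momentum
        self.temperature = temperature
        # multi-head attention relating student features to teacher features
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)

    @torch.no_grad()
    def _momentum_update(self):
        # EMA update of the key encoder from the student
        for ps, pk in zip(self.student.parameters(), self.key_encoder.parameters()):
            pk.data.mul_(self.momentum).add_(ps.data, alpha=1.0 - self.momentum)

    def forward(self, x_q, x_k):
        q = F.normalize(self.student(x_q), dim=-1)            # (B, D) student features
        with torch.no_grad():
            self._momentum_update()
            k = F.normalize(self.key_encoder(x_k), dim=-1)    # (B, D) momentum features
            t = F.normalize(self.teacher(x_q), dim=-1)        # (B, D) teacher features

        # InfoNCE-style contrastive loss: matching (q_i, k_i) pairs are positives
        logits = q @ k.t() / self.temperature
        labels = torch.arange(q.size(0), device=q.device)
        loss_contrast = F.cross_entropy(logits, labels)

        # Attention-based distillation: student features attend to teacher features
        # across the mini-batch, and the attended output is matched to the teacher
        q_seq, t_seq = q.unsqueeze(0), t.unsqueeze(0)          # (1, B, D)
        attended, _ = self.attn(q_seq, t_seq, t_seq)
        loss_distill = F.mse_loss(attended.squeeze(0), t)

        return loss_contrast + loss_distill
```

In a typical momentum-contrastive setup, x_q and x_k would be two augmented views of the same histopathology patch and the student and teacher backbones would emit feat_dim-dimensional embeddings; how MoMA actually combines and weights these components is defined in the paper itself.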