Graph and text multi-modal representation learning with momentum distillation on Electronic Health Records
Highlights
• Proposing a multi-modal pretraining method that leverages unstructured textual data and medical codes.
• Addressing the challenge of noisy and unreliable labels using five proxy tasks with a momentum distillation mechanism.
• Demonstrating strong performance across diverse evaluations.