Abstract: Multimodal learning leverages information across modalities to learn better feature representations for improved performance in fusion-based tasks. However, multimodal datasets, especially in medical settings, are typically smaller than their unimodal counterparts, which often impedes the performance of multimodal models. Moreover, an increase in the number of modalities is usually associated with an overall increase in the size of the multimodal network, which may be undesirable in medical use cases. Alternatively, utilizing smaller unimodal encoders may lead to sub-optimal performance, especially when dealing with high-dimensional clinical data. In this paper, we propose the Modality-INformed knowledge Distillation (MIND) framework, a multimodal model compression framework based on knowledge distillation that transfers knowledge from ensembles of pre-trained deep neural networks of varying sizes into a smaller multimodal student. The teacher models consist of unimodal networks, allowing the student to learn diverse representations. MIND employs multi-head joint fusion models, rather than single-head models, enabling the unimodal encoders to be used when modalities are missing. As a result, MIND generates an optimized multimodal model that enhances both multimodal and unimodal representations, and it can also be leveraged to balance multimodal learning during training. We evaluate MIND on binary classification and multilabel clinical prediction tasks using clinical time series data and chest X-ray images extracted from publicly available datasets. In addition, we assess the generalizability of the MIND framework on three multimodal multiclass benchmark datasets. The experimental results demonstrate that MIND improves the performance of the smaller multimodal network across all five tasks, as well as across fusion methods and multimodal network architectures, relative to several state-of-the-art baselines.
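To make the described setup concrete, below is a minimal sketch of what a multi-head joint fusion student with unimodal-teacher knowledge distillation could look like. This is an illustrative assumption, not the authors' implementation: the module names, hidden sizes, two-modality setup (time series and image features), and the standard softened-KL distillation loss are all hypothetical stand-ins for the components the abstract mentions.

```python
# Illustrative sketch only (assumed names and losses, not the MIND implementation):
# a two-modality multi-head joint fusion student distilled from unimodal teachers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadFusionStudent(nn.Module):
    """Small student: one encoder and one prediction head per modality, plus a fusion head."""
    def __init__(self, dim_ts, dim_img, hidden=64, n_classes=2):
        super().__init__()
        self.enc_ts = nn.Sequential(nn.Linear(dim_ts, hidden), nn.ReLU())
        self.enc_img = nn.Sequential(nn.Linear(dim_img, hidden), nn.ReLU())
        self.head_ts = nn.Linear(hidden, n_classes)    # usable alone if the image is missing
        self.head_img = nn.Linear(hidden, n_classes)   # usable alone if the time series is missing
        self.head_fusion = nn.Linear(2 * hidden, n_classes)

    def forward(self, x_ts, x_img):
        z_ts, z_img = self.enc_ts(x_ts), self.enc_img(x_img)
        fused = self.head_fusion(torch.cat([z_ts, z_img], dim=-1))
        return self.head_ts(z_ts), self.head_img(z_img), fused

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard KD objective: cross-entropy on labels plus softened KL to the teacher logits."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * T * T
    return alpha * ce + (1 - alpha) * kd

# Toy usage: random tensors stand in for data; the "teacher" logits stand in for
# ensembled outputs of pre-trained unimodal networks of varying sizes.
x_ts, x_img = torch.randn(8, 32), torch.randn(8, 128)
labels = torch.randint(0, 2, (8,))
student = MultiHeadFusionStudent(dim_ts=32, dim_img=128)
t_ts, t_img = torch.randn(8, 2), torch.randn(8, 2)  # placeholder teacher-ensemble logits
p_ts, p_img, p_fused = student(x_ts, x_img)
loss = (distillation_loss(p_ts, t_ts, labels)
        + distillation_loss(p_img, t_img, labels)
        + F.cross_entropy(p_fused, labels))
loss.backward()
```

Under this reading, each unimodal head is distilled from its corresponding teacher ensemble while the fusion head is trained on the task labels, which is one way the per-modality heads could remain usable on their own when a modality is absent at inference time.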
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Jianbo_Jiao2
Submission Number: 3419