Supplementary Material: zip
Track: Extended Abstract Track
Keywords: multimodal, model merging, representation learning, healthcare, biomedicine
TL;DR: Multimodal fusion and model merging method for structurally heterogeneous modalities
Abstract: Learning holistic computational representations of physical, chemical, or biological systems requires the ability to process information from different distributions and modalities within the same model. While many multimodal fusion and alignment approaches exist, most require end-to-end training, scale quadratically with the number of modalities, cannot handle high modality imbalance in the training set, or are highly topology-specific, making them too restrictive for many biomedical learning tasks. This paper presents \emph{Multimodal Lego} (MM-Lego), a general-purpose fusion framework that turns any set of encoders into a competitive multimodal model with no or minimal fine-tuning. We achieve this by introducing a wrapper for any unimodal encoder that enforces shape consistency between modality representations and harmonises these representations by learning features in the frequency domain, enabling model merging with little signal interference. We show that MM-Lego 1) can be used as a \emph{model merging} method that achieves competitive performance with end-to-end fusion models \emph{without any fine-tuning}, 2) can operate on any unimodal encoder, and 3) is a \emph{model fusion} method that, with minimal fine-tuning, achieves state-of-the-art results on six benchmarked multimodal biomedical tasks.
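The merging idea sketched in the abstract (shape-consistent wrappers plus frequency-domain harmonisation) can be illustrated with a minimal toy example. This is a hypothetical simplification, not the paper's actual method: `harmonise` and `merge` are illustrative names, the wrapper here just zero-pads to a shared dimension and applies a real FFT, whereas MM-Lego's harmonisation is learned.

```python
import numpy as np

def harmonise(rep, target_dim=64):
    """Wrapper step (toy version): pad/truncate a unimodal representation
    to a shared shape, then move it to the frequency domain. In MM-Lego
    the harmonisation is learned; this fixed transform is an assumption."""
    v = np.zeros(target_dim)
    n = min(len(rep), target_dim)
    v[:n] = rep[:n]
    return np.fft.rfft(v)

def merge(reps, target_dim=64):
    """Merge step (toy version): average the frequency-domain spectra of
    all modalities, then map back to a fused real-valued feature vector."""
    spectra = [harmonise(r, target_dim) for r in reps]
    fused_spectrum = np.mean(spectra, axis=0)
    return np.fft.irfft(fused_spectrum, n=target_dim)

# Two "modalities" with different native dimensionalities, e.g. an image
# embedding and a tabular embedding from independently trained encoders.
img_rep = np.random.default_rng(0).normal(size=128)
tab_rep = np.random.default_rng(1).normal(size=32)
fused = merge([img_rep, tab_rep])
assert fused.shape == (64,)
```

The point of the sketch is only structural: because both representations are first forced into the same shape and the same (frequency) basis, a simple parameter-free average suffices to combine them, which is what makes merging without fine-tuning conceivable.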
Submission Number: 40