Keywords: cross-modal adaptation, partial differential equations, architecture
TL;DR: Decoder-only models fall behind encoder-only models on cross-modal adaptation, but our two proposed bidirectionality-mimicking methods close the gap.
Abstract: While large language models are primarily used for natural language tasks, they have also shown great promise when adapted to new modalities, e.g., for scientific machine learning tasks. Most proposed approaches for such cross-modal adaptation of language models focus on encoder-only transformer architectures, even though decoder-only architectures have become far more popular for language tasks in recent years and are trained at much larger scales. This raises the question of how model architecture affects cross-modal adaptation approaches, and whether we can leverage the success of decoder-only models. In this paper, we systematically compare encoder-only and decoder-only language models on cross-modal adaptation for time-dependent simulation tasks based on partial differential equations (PDEs). We find that decoder-only models perform far worse than encoder-only models when existing approaches are applied unmodified.
In contrast to several other domains, scaling decoder-only models also does not help. To enhance the performance of decoder-only models in this setting, we introduce two novel approaches that mimic bidirectionality: Parallel Flipping and Sequence Doubling. Both methods improve the performance of decoder-only models across all tasks and all cross-modal adaptation approaches, closing the gap to encoder-only models. We hope that our findings broaden the spectrum of models used for cross-modal adaptation tasks and thereby further scientific machine learning.
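Note: the abstract does not detail how Parallel Flipping and Sequence Doubling operate. The sketch below is only a hedged illustration of the general idea of mimicking bidirectionality with a causal, decoder-only transformer: the same causal stack processes the sequence and a flipped copy, and the two passes are combined. All names (PseudoBidirectionalEncoder, causal_stack, the averaging step) are assumptions for illustration and are not the paper's actual algorithms.

    # Hypothetical sketch, not the paper's method: approximate bidirectional
    # context with a causal decoder by also processing the reversed sequence.
    import torch
    import torch.nn as nn

    class PseudoBidirectionalEncoder(nn.Module):
        def __init__(self, d_model=64, nhead=4, num_layers=2):
            super().__init__()
            layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            self.causal_stack = nn.TransformerEncoder(layer, num_layers)

        def forward(self, x):
            # x: (batch, seq_len, d_model), e.g. embedded PDE time steps
            seq_len = x.size(1)
            causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len)
            fwd = self.causal_stack(x, mask=causal_mask)                   # left-to-right pass
            bwd = self.causal_stack(x.flip(1), mask=causal_mask).flip(1)   # right-to-left pass
            return 0.5 * (fwd + bwd)  # combine passes to mimic bidirectional context

    # Usage sketch
    model = PseudoBidirectionalEncoder()
    h = model(torch.randn(2, 16, 64))  # -> (2, 16, 64)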
Journal Opt In: Yes, I want to participate in the IOP focus collection submission
Journal Corresponding Email: pgherreros@lsv.uni-saarland.de
Submission Number: 78