MoME: Mixture of Multi-Domain Experts for Multivariate Long-Term Series Forecasting

Xinyu Li, Zhiheng Yang, Hao Xu, Yunqi Cai, Hong Lu, Xin Wang, Jin Zhao, Fenglin Qi, Jiajie Shen

Published at ICASSP 2025. License: CC BY-SA 4.0
Abstract: Time series forecasting is a long-standing problem, and multivariate long-term forecasting is among its most challenging tasks. Existing methods typically learn in a single domain and focus on optimizing model structures, leading to incomplete information mining and imprecise predictions. To address this, we propose a generalized Mixture of Multi-Domain Experts (MoME) for multivariate long-term series forecasting. Unlike most existing methods, MoME focuses on mining and fusing information from multiple perspectives. To this end, MoME transforms time series into the frequency and spatial domains and learns a representation in each. Treating the variates as embedded features, MoME applies the fast Fourier transform along the time dimension and learns these features in the frequency domain. For spatial-domain learning, MoME applies a self-attention mechanism along the variate dimension to efficiently capture dependencies among variates. Finally, MoME fuses the outputs of all domains, reinterpreting and integrating the cross-domain information to predict future time series. Extensive experiments show that MoME outperforms state-of-the-art (SOTA) methods. Code is available at: https://github.com/lxy-PhD2022/MoME
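The authors' official code is at the repository above. As a rough illustration of the multi-domain idea described in the abstract, the following is a minimal PyTorch sketch, not the authors' implementation: the module names (FrequencyExpert, SpatialExpert, MoMESketch), the linear spectral maps, and the softmax gating fusion are all assumptions. One expert learns on the FFT spectrum taken along the time dimension; the other applies self-attention across the variate dimension; a gate mixes the two forecasts per variate.

```python
import torch
import torch.nn as nn


class FrequencyExpert(nn.Module):
    """Hypothetical frequency-domain expert: FFT along time, learn on the spectrum."""

    def __init__(self, seq_len: int, pred_len: int):
        super().__init__()
        n_in = seq_len // 2 + 1            # rfft output length
        n_out = pred_len // 2 + 1
        # Separate linear maps for real and imaginary parts (an assumption).
        self.real = nn.Linear(n_in, n_out)
        self.imag = nn.Linear(n_in, n_out)
        self.pred_len = pred_len

    def forward(self, x):                  # x: (batch, n_vars, seq_len)
        spec = torch.fft.rfft(x, dim=-1)   # transform along the time dimension
        spec = torch.complex(self.real(spec.real), self.imag(spec.imag))
        return torch.fft.irfft(spec, n=self.pred_len, dim=-1)


class SpatialExpert(nn.Module):
    """Hypothetical spatial-domain expert: self-attention across variates."""

    def __init__(self, seq_len: int, pred_len: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(seq_len, n_heads, batch_first=True)
        self.proj = nn.Linear(seq_len, pred_len)

    def forward(self, x):                  # x: (batch, n_vars, seq_len)
        # Each variate's whole series is one token, so attention
        # runs over the variate dimension and captures cross-variate dependencies.
        out, _ = self.attn(x, x, x)
        return self.proj(out)


class MoMESketch(nn.Module):
    """Minimal sketch of multi-domain expert fusion (not the official model)."""

    def __init__(self, seq_len: int, pred_len: int):
        super().__init__()
        self.freq = FrequencyExpert(seq_len, pred_len)
        self.spatial = SpatialExpert(seq_len, pred_len)
        self.gate = nn.Linear(seq_len, 2)  # per-variate mixing weights over 2 experts

    def forward(self, x):                  # x: (batch, seq_len, n_vars)
        x = x.transpose(1, 2)              # -> (batch, n_vars, seq_len)
        experts = torch.stack([self.freq(x), self.spatial(x)], dim=-1)
        w = torch.softmax(self.gate(x), dim=-1).unsqueeze(2)
        y = (experts * w).sum(dim=-1)      # fuse the expert forecasts
        return y.transpose(1, 2)           # -> (batch, pred_len, n_vars)


# Usage: a 7-variate input window of length 96, forecasting 192 steps ahead.
model = MoMESketch(seq_len=96, pred_len=192)
x = torch.randn(8, 96, 7)
print(model(x).shape)                      # torch.Size([8, 192, 7])
```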