ReMoBA: Representative Replay and Mixture of BatchNoise Autoencoders for Pre-Trained Model-Based Federated Domain-Incremental Learning

ICLR 2026 Conference Submission 15130 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Federated Learning, Domain-Incremental Learning, Pre-trained Models, Replay-Based Continual Learning, Physics-Inspired Representation Alignment
TL;DR: ReMoBA enables robust federated domain-incremental learning with pre-trained models by combining diverse replay, physics-based embedding alignment, and generative classification to combat forgetting and inter-client confusion.
Abstract: Federated Domain-Incremental Learning (FDIL) orchestrates model updates across multiple clients whose data are drawn from diverse domains. Although pre-trained models (PTMs) offer a robust initial foundation, naively adapting them in FDIL environments often leads to inter- and intra-client task confusion, in addition to catastrophic forgetting. In this work, we mathematically characterize these issues within the FDIL framework and introduce Representative Replay and Mixture of BatchNoise Autoencoders (ReMoBA), a replay-based generative approach that consolidates both representations and classifiers. Specifically, ReMoBA employs a diversity-guaranteed exemplar-selection strategy in the latent space to replay a tiny, optimally curated subset of past data stored on the client side, preserving previously acquired representations, while the server globally determines new-domain embeddings for all clients via charged-particle energy-minimization equations and a repulsive-force algorithm. ReMoBA further leverages a mixture of autoencoders, trained with structured noise, to enhance robustness and generalization. Extensive experiments on benchmark datasets demonstrate that ReMoBA consistently outperforms state-of-the-art FDIL methods, offering PTMs superior adaptability to new domains while mitigating inter- and intra-client task confusion. Source code will be released upon acceptance.
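The abstract names two algorithmic ingredients that can be made concrete: diversity-guaranteed exemplar selection in latent space, and server-side placement of new-domain embeddings by repulsive-force energy minimization. The sketch below is a minimal NumPy illustration of both, assuming a greedy farthest-point (k-center) selector for the replay buffer and a Coulomb-style 1/r pairwise energy minimized on the unit sphere for the embedding step; the function names, the sphere constraint, and all hyperparameters are illustrative assumptions, not the submission's released implementation.

```python
# Minimal sketch (not the authors' code): plausible stand-ins for the two
# mechanisms described in the abstract. All names here are hypothetical.
import numpy as np


def farthest_point_exemplars(latents: np.ndarray, budget: int) -> np.ndarray:
    """Diversity-guaranteed exemplar selection in latent space via greedy
    farthest-point (k-center) sampling: each pick maximizes its distance to
    the already-selected set, so a tiny replay buffer still spans the domain."""
    n = latents.shape[0]
    # Seed with the point farthest from the centroid (an arbitrary choice).
    chosen = [int(np.argmax(np.linalg.norm(latents - latents.mean(0), axis=1)))]
    dist = np.linalg.norm(latents - latents[chosen[0]], axis=1)
    while len(chosen) < min(budget, n):
        nxt = int(np.argmax(dist))  # point farthest from the current set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(latents - latents[nxt], axis=1))
    return np.asarray(chosen)


def repulsive_embedding_placement(anchors: np.ndarray, k_new: int,
                                  steps: int = 500, lr: float = 0.05,
                                  seed: int = 0) -> np.ndarray:
    """Server-side placement of k_new new-domain embeddings: treat all
    embeddings as like charges, minimize the Coulomb-style energy
    sum_{i<j} 1 / ||e_i - e_j|| by gradient descent on the new points,
    holding previously assigned anchor embeddings fixed. Anchors are
    assumed to lie on the unit sphere."""
    rng = np.random.default_rng(seed)
    d = anchors.shape[1]
    new = rng.normal(scale=0.1, size=(k_new, d))
    for _ in range(steps):
        pts = np.vstack([anchors, new])
        diff = new[:, None, :] - pts[None, :, :]            # (k_new, N, d)
        dist = np.linalg.norm(diff, axis=-1) + 1e-8
        force = (diff / dist[..., None] ** 3).sum(axis=1)   # net repulsion
        new += lr * force                                   # descend the energy
        new /= np.linalg.norm(new, axis=1, keepdims=True)   # project to sphere
    return new
```

The unit-sphere projection is one way to keep the repulsion problem well posed: without a norm constraint, mutually repelling charges drift to infinity rather than spreading out, so some bounded domain for the embeddings has to be assumed.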
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 15130