Keywords: online continual learning, fair resource allocation
TL;DR: We propose FairOCL, an online continual learning framework inspired by fair resource allocation in communication networks that enables principled control over task prioritization.
Abstract: Online continual learning (OCL) aims to enable neural networks to learn sequentially from streaming data while mitigating catastrophic forgetting, the key challenge whereby learning new tasks interferes with the retention of previously acquired knowledge. Most existing approaches rely on memory buffers to replay past samples, but training jointly on mixed data from different tasks often leads to gradient conflicts that undermine model performance. To address this, we propose FairOCL, a framework that draws inspiration from fair resource allocation in communication networks. FairOCL formulates gradient aggregation across tasks as a constrained utility maximization problem and enforces fairness in the optimization process, allowing principled control over task prioritization. Extensive experiments on several standard benchmarks show that FairOCL achieves consistent improvements over state-of-the-art methods. Our code will be released.
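To make the "fair resource allocation" framing concrete, below is a minimal sketch of what fairness-controlled gradient aggregation could look like. The abstract does not specify the paper's utility function or constraints, so this example assumes the classic alpha-fair utility from network resource allocation (Mo & Walrand, 2000) with a simplex budget on per-task weights; the rate proxy c_i = ||g_i|| and the helper names `alpha_fair_weights` and `aggregate` are hypothetical illustrations, not the authors' method.

```python
# Sketch of alpha-fair gradient aggregation across tasks (NOT the paper's
# exact formulation). Assumption: maximize sum_i U_alpha(w_i * c_i) subject
# to sum_i w_i = 1, where U_alpha(x) = x^(1-alpha)/(1-alpha), or log(x)
# when alpha = 1, and c_i is a per-task "rate" proxy.
import torch

def alpha_fair_weights(rates: torch.Tensor, alpha: float) -> torch.Tensor:
    """Closed-form maximizer of the alpha-fair objective on the simplex.
    Stationarity (c_i^(1-alpha) * w_i^(-alpha) = lambda) gives
    w_i proportional to c_i^((1-alpha)/alpha)."""
    if alpha == 1.0:
        # Proportional fairness (log utility) yields uniform weights.
        w = torch.ones_like(rates)
    else:
        w = rates.pow((1.0 - alpha) / alpha)
    return w / w.sum()

def aggregate(task_grads: list[torch.Tensor], alpha: float = 2.0) -> torch.Tensor:
    """Combine per-task gradients with alpha-fair weights.
    c_i = ||g_i|| is a hypothetical rate proxy; a real system might
    instead use per-task loss or projected loss decrease."""
    g = torch.stack(task_grads)             # (num_tasks, dim)
    rates = g.norm(dim=1).clamp_min(1e-12)  # keep c_i > 0 for stability
    w = alpha_fair_weights(rates, alpha)    # larger alpha -> more max-min
    return (w.unsqueeze(1) * g).sum(dim=0)  # weighted gradient sum

# Example: alpha interpolates between utilitarian (alpha -> 0) and
# max-min fair (alpha -> infinity) treatment of tasks, so a weak replay
# gradient is not drowned out by a strong current-task gradient.
g_replay = torch.tensor([0.2, 0.1])    # gradient from replayed samples
g_current = torch.tensor([1.0, -0.8])  # gradient from the current task
print(aggregate([g_replay, g_current], alpha=2.0))
```

Under this assumed objective, the fairness parameter alpha is the knob for task prioritization: alpha = 1 treats tasks uniformly, while larger alpha shifts weight toward tasks with smaller gradient rates, approaching max-min fairness.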
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 15235