Online Continual Learning for Progressive Distribution Shift (OCL-PDS): A Practitioner's Perspective

Published: 10 Mar 2023 (last modified: 28 Apr 2023), ICLR 2023 Workshop DG Poster
Keywords: Online continual learning, Progressive distribution shift, Benchmarks and baselines, OOD generalization
TL;DR: We introduce the novel OCL-PDS problem (Online Continual Learning for Progressive Distribution Shift), build 4 new benchmarks, and implement 12 algorithms and baselines for practitioners.
Abstract: We introduce the novel OCL-PDS problem: Online Continual Learning for Progressive Distribution Shift. PDS refers to the subtle, gradual, and continuous distribution shift that widely exists in modern deep learning applications, and it is widely observed in industry that PDS can cause significant performance drops. While previous work in continual learning and domain adaptation addresses this problem to some extent, our investigations from the practitioner's perspective reveal flawed assumptions that limit applicability to the daily challenges faced in real-world scenarios; this work aims to close the gap between academic research and industry. For this new problem, we build 4 new benchmarks from the Wilds dataset and implement 12 algorithms and baselines, including both supervised and semi-supervised methods, which we test extensively on the new benchmarks. We hope that this work can provide practitioners with tools to better handle realistic PDS, and help scientists design better OCL algorithms.
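To make the setting concrete, here is a minimal, hedged sketch of an online-learning loop under progressive distribution shift. It is not the paper's method: the toy data generator (`make_batch`, with class means that drift linearly over time) and the plain online logistic-regression baseline are illustrative assumptions, chosen only to show why a model must be updated continually as the stream drifts.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_batch(t, n=32, d=5):
    """Toy stream with progressive distribution shift (illustrative assumption):
    both class means drift slowly with time step t, so a fixed decision
    boundary gradually becomes stale."""
    shift = 0.005 * t  # small per-step drift
    y = rng.integers(0, 2, size=n)
    mu = np.where(y[:, None] == 1, 1.0 + shift, -1.0 + shift)
    X = mu + rng.normal(size=(n, d))
    return X, y

def sigmoid(z):
    # Clip for numerical stability before exponentiating.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Online logistic regression, updated one batch at a time: a simple
# supervised OCL baseline, evaluated prequentially (test-then-train).
w, b = np.zeros(5), 0.0
accs = []
for t in range(200):
    X, y = make_batch(t)
    p = sigmoid(X @ w + b)
    accs.append(float(np.mean((p > 0.5) == (y == 1))))  # evaluate first
    g = p - y                                           # then update
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()
```

Because the update step tracks the drifting boundary, prequential accuracy stays high late in the stream; freezing the model instead would let the shift accumulate unanswered, which is the failure mode OCL-PDS targets.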
Submission Number: 4