From IID to the Independent Mechanisms assumption in continual learning

Published: 01 Jan 2023 · Last Modified: 13 Jan 2025 · AAAI Bridge Program 2023 · CC BY-SA 4.0
Abstract: Current machine learning algorithms succeed at learning clearly defined tasks from large i.i.d. datasets. Continual learning (CL) requires learning without the i.i.d. assumption and developing algorithms capable of knowledge retention and transfer; the latter can be boosted through systematic generalization. Dropping the i.i.d. assumption requires replacing it with another hypothesis. While there are several candidates, here we advocate that the independent mechanisms assumption (IM) (Schölkopf et al. 2012) is a useful hypothesis for representing knowledge in a form that makes it easy to adapt to new tasks in CL. Specifically, we review several types of distribution shift that are common in CL and point out in which ways a system that represents knowledge as causal modules may outperform monolithic counterparts in CL. Intuitively, the efficacy of the IM-based approach emerges because: (i) causal modules learn mechanisms that are invariant across domains; (ii) if causal mechanisms must be updated, modularity enables efficient and sparse updates.
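To make point (ii) concrete, here is a minimal illustrative sketch, not taken from the paper: a model composed of independent linear mechanisms where, after a distribution shift that affects a single mechanism, all other modules are frozen and only the shifted one is updated. The class and function names (`LinearMechanism`, `ModularModel`, `adapt`) are hypothetical and chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)


class LinearMechanism:
    """One causal mechanism, modeled as an independent linear map."""

    def __init__(self, dim):
        self.W = rng.normal(size=(dim, dim)) * 0.1
        self.frozen = False

    def __call__(self, x):
        return x @ self.W

    def update(self, x, target, lr=0.1):
        # Sparse updates: frozen mechanisms are left untouched.
        if self.frozen:
            return
        # One gradient step on the squared error ||x W - target||^2.
        grad = x.T @ (x @ self.W - target) / len(x)
        self.W -= lr * grad


class ModularModel:
    """Composition of mechanisms applied in sequence."""

    def __init__(self, dim, n_modules):
        self.modules = [LinearMechanism(dim) for _ in range(n_modules)]

    def __call__(self, x):
        for m in self.modules:
            x = m(x)
        return x

    def adapt(self, shifted_idx):
        # A distribution shift hit one mechanism: freeze all the others,
        # so adaptation changes only the affected module's parameters.
        for i, m in enumerate(self.modules):
            m.frozen = (i != shifted_idx)


# Usage: adapt only module 1 after a shift; modules 0 and 2 stay invariant.
model = ModularModel(dim=4, n_modules=3)
model.adapt(shifted_idx=1)
x = rng.normal(size=(8, 4))
target = rng.normal(size=(8, 4))
for m in model.modules:
    m.update(x, target)  # no-op for frozen modules
```

Under the IM assumption, a monolithic model would instead entangle all mechanisms in one parameter block, so the same shift would force a global update and risk overwriting knowledge that was still valid.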