An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks

28 Mar 2024 (modified: 24 Dec 2013) · ICLR 2014 conference submission · Readers: Everyone
Decision: submitted, no decision
Abstract: Catastrophic forgetting is a problem faced by many machine learning models and algorithms. When trained on one task, then trained on a second task, many machine learning models 'forget' how to perform the first task. This is widely believed to be a serious problem for neural networks. Here, we investigate the extent to which the catastrophic forgetting problem occurs for modern neural networks, comparing both established and recent gradient-based training algorithms and activation functions. We also examine the effect of the relationship between the first task and the second task on catastrophic forgetting.
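The sequential-training protocol the abstract describes can be made concrete with a short sketch. The following is a minimal illustration, not the paper's actual experimental code: the synthetic binary tasks, the network architecture, and all hyperparameters are assumptions chosen only to make the forgetting effect visible. It trains a small MLP on task A, then on task B, and reports how much task-A accuracy degrades.

```python
# Minimal sketch of the catastrophic-forgetting protocol (illustrative, not the
# authors' setup): train on task A, then task B, then re-evaluate on task A.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(n=2000, d=20):
    # Hypothetical synthetic task: Gaussian inputs with random linear labels.
    X = torch.randn(n, d)
    w = torch.randn(d)
    y = (X @ w > 0).long()
    return X, y

def train(model, X, y, epochs=200, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()

def accuracy(model, X, y):
    with torch.no_grad():
        return (model(X).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
task_a, task_b = make_task(), make_task()

train(model, *task_a)
acc_a_before = accuracy(model, *task_a)  # task-A accuracy right after task A

train(model, *task_b)                    # sequential training on task B
acc_a_after = accuracy(model, *task_a)   # task-A accuracy after task B

print(f"Task A accuracy: {acc_a_before:.3f} -> {acc_a_after:.3f} "
      f"(the drop is the forgetting)")
```

The paper's experiments vary this basic recipe along the axes named in the abstract: the training algorithm, the activation function, and how related the two tasks are.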