Atlas: Universal Function Approximator For Memory Retention

16 May 2022 (modified: 03 Jul 2024) · NeurIPS 2022 Submission
Keywords: universal function approximation, artificial neural networks, splines, catastrophic forgetting, continual learning
TL;DR: A novel ANN architecture and universal function approximator built with exponentials and B-splines to prevent catastrophic forgetting.
Abstract: Artificial neural networks (ANNs), despite their universal function approximation capability and practical success, are subject to catastrophic forgetting. Catastrophic forgetting refers to the abrupt unlearning of a previous task when a new task is learned. It is an emergent phenomenon that plagues ANNs and hinders continual learning. Existing universal function approximation theorems for ANNs guarantee approximation ability but say little about model details and do not predict catastrophic forgetting. This paper presents a novel universal approximation theorem for multi-variable functions that uses only single-variable functions and exponential functions. Building on this theorem and B-splines, we present ATLAS, a novel ANN architecture. It is shown that ATLAS is a universal function approximator capable of memory retention and, therefore, continual learning. The memory retention of ATLAS is imperfect, with some off-target effects during continual learning, but it is well-behaved and predictable. An efficient implementation of ATLAS is provided, and experiments were conducted to evaluate both the function approximation and memory retention capabilities of ATLAS.
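The abstract does not spell out the exponential approximation theorem or the ATLAS architecture, so the snippet below is only a minimal illustrative sketch of the general idea that multi-variable functions can be assembled from single-variable functions and exponentials: a separable term g1(x)·g2(y) equals exp(log g1(x) + log g2(y)), so sums of such terms require only 1-D functions (fitted here with 1-D B-splines via SciPy's `make_interp_spline`) plus the exponential. The target function, grid, and variable names are hypothetical and chosen for illustration; this is not the paper's exact construction.

```python
# Sketch: build a two-variable function from 1-D B-splines and an exponential.
# Assumes a strictly positive, separable target purely for demonstration.
import numpy as np
from scipy.interpolate import make_interp_spline

# Hypothetical target: f(x, y) = (1 + x**2) * (2 + sin(y))  (positive, separable)
x = np.linspace(-1.0, 1.0, 50)
y = np.linspace(-1.0, 1.0, 50)

# Fit 1-D cubic B-splines to the logarithms of the single-variable factors.
s1 = make_interp_spline(x, np.log(1 + x**2), k=3)        # spline for log g1(x)
s2 = make_interp_spline(y, np.log(2 + np.sin(y)), k=3)   # spline for log g2(y)

# Reassemble the two-variable function using only 1-D splines and exp.
X, Y = np.meshgrid(x, y, indexing="ij")
f_hat = np.exp(s1(X) + s2(Y))
f_true = (1 + X**2) * (2 + np.sin(Y))
print("max abs error:", np.abs(f_hat - f_true).max())
```

A general (non-separable) function would need a sum of several such exponential terms; the paper's theorem presumably makes that decomposition precise, and ATLAS parameterizes the single-variable parts with B-splines.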
Supplementary Material: zip
Community Implementations: 1 code implementation (via CatalyzeX): https://www.catalyzex.com/paper/atlas-universal-function-approximator-for/code