Adaptive Regularization for Class-Incremental Learning

Published: 16 May 2023, Last Modified: 10 Apr 2024, AutoML 2023 Workshop
Keywords: continual learning, hyperparameter optimization, regularization
TL;DR: We auto-tune the magnitude of the regularization loss for class-incremental learning.
Abstract: Class-Incremental Learning updates a deep classifier with new categories while maintaining accuracy on the previously observed classes. Regularizing the neural network weights is a common way to prevent forgetting previously learned classes while learning novel ones. However, existing regularizers use a constant magnitude across learning sessions, which may not reflect the varying difficulty of the tasks encountered during incremental learning. This study investigates the necessity of adaptive regularization in Class-Incremental Learning, which dynamically adjusts the regularization strength according to the complexity of the task at hand. We propose a Bayesian Optimization-based approach to automatically determine the optimal regularization magnitude for each learning task. Our experiments on two datasets with two regularizers demonstrate the importance of adaptive regularization for accurate and less forgetful visual incremental learning.
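
To make the approach concrete, below is a minimal sketch of how a per-task regularization magnitude could be tuned with off-the-shelf Bayesian Optimization (scikit-optimize's `gp_minimize`). The search bounds and the `train_on_task` / `validate` callables are illustrative assumptions, not the paper's exact setup.

```python
from skopt import gp_minimize
from skopt.space import Real

def tune_regularization_for_task(train_on_task, validate, n_calls=10):
    """Choose the regularization magnitude (lambda) for one incremental task.

    `train_on_task(reg_strength)` should train on the new task with the given
    penalty weight (e.g., an EWC- or LwF-style regularizer) and return the
    resulting model; `validate(model)` should return validation accuracy.
    Both are hypothetical callables supplied by the surrounding pipeline.
    """
    def objective(params):
        lam = params[0]
        model = train_on_task(reg_strength=lam)
        # Negate accuracy because gp_minimize minimizes the objective.
        return -validate(model)

    # Deliberately narrow, log-uniform search space, echoing the paper's
    # note about constraining the search space to limit compute.
    result = gp_minimize(
        objective,
        dimensions=[Real(1e-2, 1e2, prior="log-uniform", name="lam")],
        n_calls=n_calls,
        random_state=0,
    )
    return result.x[0]  # best lambda found for this task
```

Called once per incremental session, the returned value would weight the regularization term in that session's loss, so each task gets its own penalty strength rather than a constant one.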
Submission Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Reviewers: Yes
CPU Hours: 0
GPU Hours: 148
TPU Hours: 0
Steps For Environmental Footprint Reduction During Development: We drastically constrained the search space used to tune the hyperparameter. Larger search spaces can yield higher performance, but that was not our goal.
Estimated CO2e Footprint: 15 kg