Memory Efficient Invertible Neural Networks for Class-Incremental Learning

Published: 01 Jan 2021 · Last Modified: 14 May 2025 · AICAS 2021 · License: CC BY-SA 4.0
Abstract: Recent advances in specialized hardware accelerators for Deep Neural Network (DNN) training are opening the way for a wider use of DNN models in embedded systems. At the same time, as new data is continuously acquired, building models that can cope with a continuous stream of data is becoming a major challenge. The core issue is how to update a model without storing all previously seen data. In this article, we focus on the setting where new classes are learned sequentially. We propose to adapt the learning procedure of one-versus-all invertible neural networks, a state-of-the-art method for class-incremental learning, to reduce its memory footprint. We conduct our experiments on the CIFAR-100 dataset, learning each class one after the other. Our results show that the proposed approach achieves similar accuracy while reducing the memory cost by a factor of up to five compared to the original implementation.
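
For readers unfamiliar with the building block the abstract refers to, the sketch below shows a minimal additive-coupling layer (NICE/RealNVP-style) in PyTorch, illustrating why invertible networks can be memory efficient: intermediate activations need not be stored, since they can be reconstructed exactly from the layer output. The class name, conditioner network, and dimensions are illustrative assumptions and do not reproduce the architecture or the memory-reduction procedure proposed in the paper.

```python
import torch
import torch.nn as nn


class AdditiveCoupling(nn.Module):
    """Minimal additive coupling layer: invertible by construction, so
    inputs/activations can be recomputed from outputs instead of cached."""

    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.half = dim // 2
        # Small conditioner network; an arbitrary illustrative choice,
        # not the architecture used in the paper.
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, dim - self.half),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x[:, :self.half], x[:, self.half:]
        y2 = x2 + self.net(x1)            # shift second half conditioned on first
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y[:, :self.half], y[:, self.half:]
        x2 = y2 - self.net(y1)            # exact inverse: subtract the same shift
        return torch.cat([y1, x2], dim=1)


if __name__ == "__main__":
    layer = AdditiveCoupling(dim=64)
    x = torch.randn(8, 64)
    # Round-trip check: the input is recovered exactly from the output.
    assert torch.allclose(layer.inverse(layer(x)), x, atol=1e-6)
```

In a one-versus-all class-incremental setup, one such invertible model (or head) would typically be trained per class as that class's data arrives, so earlier classes' data never needs to be replayed; the hedged sketch above only illustrates the invertibility property underlying that design.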
